Search results for: Low Pass Filter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 752


32 Multiparametric Optimization of Water Treatment Process for Thermal Power Plants

Authors: B. Mukanova, N. Glazyrina, S. Glazyrin

Abstract:

This article considers the multiparametric optimization of the water treatment process for thermal power plants. To optimize the process, namely to reduce the amount of wastewater, a new technology for reusing such water was developed, together with a mathematical model of the wastewater reuse technology, and the optimization parameters were determined. The model consists of a material balance equation, an equation describing the kinetics of ion exchange for the non-equilibrium case, and an equation for the ion exchange isotherm; the material balance equation includes a nonlinear term that depends on the ion exchange kinetics. The direct problem of calculating the impurity concentration at the outlet of the water treatment plant was solved numerically, approximated by an implicit point-to-point difference scheme. The inverse problem was formulated and solved to determine the parameters of the mathematical model of a water treatment plant operating under non-equilibrium conditions. From the calculation results, the start time of the filter regeneration process, the duration of regeneration, and the amount of regeneration and wash water were determined. Multiparametric optimization of the water treatment process for thermal power plants reduced the amount of wastewater by 15%.
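
To illustrate the kind of direct problem described above, the following is a minimal sketch of an implicit point-to-point (upwind) difference scheme for 1-D transport with non-equilibrium ion exchange. The linear isotherm q* = K·C and all parameter values are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

# Minimal sketch of a direct problem: 1-D advection with non-equilibrium
# ion exchange, discretized with an implicit point-to-point difference scheme.
# The linear isotherm q* = K*C and all parameter values are assumptions.

L, T = 1.0, 2.0          # column length [m], simulated time [h]
nx, nt = 100, 400
dx, dt = L / nx, T / nt
v = 0.5                  # interstitial velocity [m/h] (assumed)
k = 3.0                  # ion-exchange rate constant [1/h] (assumed)
K = 2.0                  # linear isotherm slope (assumed)
c_in = 1.0               # inlet impurity concentration (normalized)

c = np.zeros(nx + 1)     # liquid-phase concentration
q = np.zeros(nx + 1)     # solid-phase (resin) loading

outlet = []
for n in range(nt):
    c[0] = c_in                          # inlet boundary condition
    for i in range(1, nx + 1):           # point-to-point march along the column
        # implicit update of the kinetic equation dq/dt = k (K*c - q)
        q_new = (q[i] + dt * k * K * c[i]) / (1.0 + dt * k)
        dq_dt = k * (K * c[i] - q_new)
        # implicit upwind balance: dc/dt + v dc/dx = -dq/dt
        c[i] = (c[i] + dt * (v / dx) * c[i - 1] - dt * dq_dt) / (1.0 + dt * v / dx)
        q[i] = q_new
    outlet.append(c[-1])                 # breakthrough curve at the plant outlet

print(f"outlet concentration after {T} h: {outlet[-1]:.3f}")
```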

Keywords: Direct problem, multiparametric optimization, optimization parameters, water treatment.

31 The Use of Information and Communication Technologies in Electoral Procedures: Comments on Electronic Voting Security

Authors: Magdalena Musiał-Karg

Abstract:

The expansion of telecommunications and the progress of electronic media constitute important elements of our times. The recent worldwide convergence of information and communication technologies (ICT) and the dynamic development of the mass media are leading to noticeable changes in the functioning of contemporary states and societies. Currently, modern technologies play increasingly important roles and filter down to almost every field of contemporary human life. This results in the growth of online interactions, reflected in the remarkable increase in the number of people with home PCs and Internet access. Proof of this is the emergence and use of concepts such as e-society, e-banking, e-services, e-government, e-participation and e-democracy. The newly coined term e-democracy shows that modern technologies have also been widely used in politics. In most countries, all actors in the political market (politicians, political parties, civil servants in the political/public sector, the media) undoubtedly use modern forms of communication with society. Most of these technologies improve the processes of obtaining and delivering information to citizens, communication with the electorate, and also, which seems to be the biggest advantage, electoral procedures. Thanks to the implementation of ICT, the interaction between politicians and the electorate is improved. The main goal of this text is to analyze electronic voting (e-voting), one of the important forms of electronic democracy, in terms of security aspects. The author aims to answer questions about the security of electronic voting as an additional form of participation in elections and referenda.

Keywords: Electronic democracy, electronic participation, electronic voting, security of e-voting, ICT.

30 Applying Case-Based Reasoning in Supporting Strategy Decisions

Authors: S. M. Seyedhosseini, A. Makui, M. Ghadami

Abstract:

Globalization, and the resulting tight competition among companies, has increased the importance of making well-timed decisions. Devising and employing effective strategies that are flexible and adaptive to a changing market stands a greater chance of being effective in the long term. On the other hand, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Therefore, applying well-organized tools that employ past experience in new cases helps to make proper managerial decisions. Case-based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbors (k-NN) is employed to provide suggestions for better decision making for a given product in the middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach based on a genetic algorithm is employed to generate optimal feature subsets. The dataset of a department store, including various products collected over two years, has been used. A k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with the classical case-based reasoning algorithm, which has no special process for feature selection, with the CBR-PCA algorithm based on filter-approach feature selection, and with an artificial neural network. The results indicate that, for this specific case, the predictive performance of the model is more effective than that of the two CBR algorithms.
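
A minimal sketch of the retrieval-and-adaptation step of such a k-NN based CBR system is shown below; the GA wrapper feature selection is omitted, and the case base, features and outcomes are purely illustrative placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Minimal sketch of k-NN based case retrieval in a CBR system: past cases
# (feature vectors + outcomes) are stored, the k most similar cases to a new
# situation are retrieved, and their solutions are combined with
# similarity weights in the spirit of group decision making.

rng = np.random.default_rng(0)
case_features = rng.random((200, 6))      # e.g. price, stock, season, ... (assumed)
case_outcomes = rng.random(200)           # e.g. observed sales performance (assumed)

def retrieve_and_adapt(new_case, k=5):
    nn = NearestNeighbors(n_neighbors=k).fit(case_features)
    dist, idx = nn.kneighbors(new_case.reshape(1, -1))
    w = 1.0 / (dist[0] + 1e-9)            # closer cases get larger weights
    w /= w.sum()
    return float(np.dot(w, case_outcomes[idx[0]]))

print(retrieve_and_adapt(rng.random(6)))
```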

Keywords: Case based reasoning, Genetic algorithm, Group decision making, Product management.

29 FACTS Based Stabilization for Smart Grid Applications

Authors: Adel M. Sharaf, Foad H. Gandoman

Abstract:

Nowadays, photovoltaic (PV) farms/parks and large PV-smart grid interface schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interface power electronic converters usually have a negative impact on power quality and on the stabilization of modern electrical networks under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems, as an elegant alternative for renewable energy utilization with backup battery storage for electric utility energy and demand-side management, providing the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-ion battery storage interface scheme for low-voltage distribution/utilization, using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are carried out in the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-ion storage battery.

Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).

28 Unsupervised Segmentation Technique for Acute Leukemia Cells Using Clustering Algorithms

Authors: N. H. Harun, A. S. Abdul Nasir, M. Y. Mashor, R. Hassan

Abstract:

Leukaemia is a blood cancer that contributes to an increasing mortality rate in Malaysia each year. There are two main categories of leukaemia: acute and chronic. The production and development of acute leukaemia cells occur rapidly and uncontrollably. Therefore, if acute leukaemia cells could be identified quickly and effectively, proper treatment and medicine could be delivered. Due to the requirement for prompt and accurate diagnosis of leukaemia, the current study proposes unsupervised pixel segmentation based on clustering algorithms in order to obtain a fully segmented abnormal white blood cell (blast) in acute leukaemia images. To obtain the segmented blast, three clustering algorithms, namely k-means, fuzzy c-means and moving k-means, have been applied to the saturation component image. Then, a median filter and a seeded region growing area extraction algorithm have been applied to smooth the region of the segmented blast and to remove large unwanted regions from the image, respectively. Comparisons among the three clustering algorithms are made in order to measure the performance of each algorithm in segmenting the blast area. Based on the good sensitivity values obtained, the results indicate that the moving k-means clustering algorithm successfully produces the fully segmented blast region in acute leukaemia images. Hence, the resulting images could be helpful to haematologists for further analysis of acute leukaemia.
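
The clustering part of the pipeline can be sketched as follows, assuming OpenCV is available; 'blood_smear.png' is a placeholder file name, the choice of three clusters is an assumption, and the seeded region growing step is omitted for brevity.

```python
import cv2
import numpy as np

# Minimal sketch of clustering-based segmentation on the saturation component
# followed by median filtering, as described above.

img = cv2.imread("blood_smear.png")                       # placeholder image
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.float32)

# k-means with 3 clusters (background / cytoplasm / blast is an assumption)
pixels = sat.reshape(-1, 1)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)

# take the cluster with the highest saturation centre as the blast candidate
blast_cluster = int(np.argmax(centers))
mask = (labels.reshape(sat.shape) == blast_cluster).astype(np.uint8) * 255

# median filter to smooth the segmented blast region
mask = cv2.medianBlur(mask, 5)
cv2.imwrite("blast_mask.png", mask)
```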

Keywords: Acute Leukaemia Images, Clustering Algorithms, Image Segmentation, Moving k-Means.

27 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
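
A minimal 1-D sketch of the filtering idea: a smoothing spline is fitted to the points, and points lying well above the fit (likely canopy returns) are down-weighted before refitting. The synthetic profile, thresholds and weights are illustrative, not the authors' parameterization.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Sketch of iterative spline ground filtering on a synthetic 1-D profile.

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 100, 500))
ground = 0.05 * x + np.sin(x / 10.0)                 # synthetic terrain
z = ground + rng.normal(0, 0.1, x.size)
canopy = rng.random(x.size) < 0.3
z[canopy] += rng.uniform(5, 20, canopy.sum())        # canopy returns above the terrain

w = np.ones_like(z)
for _ in range(5):                                   # iterative reweighting
    spl = UnivariateSpline(x, z, w=w, s=len(x))
    resid = z - spl(x)
    w = np.where(resid > 0.5, 0.01, 1.0)             # points far above the fit get tiny weight

ground_mask = np.abs(z - spl(x)) < 0.5
print(f"kept {ground_mask.sum()} of {x.size} points as ground")
```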

Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.

26 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides extremely high sensitivity to displacements, making the system able to react to tiny deformations (as small as tens of microns) on time scales spanning from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performance processing unit allows the system to sense the observed scene with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
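
A minimal sketch of the repeat-pass interferometric step, in which line-of-sight displacement is obtained from the phase difference between two acquisitions; the Ku-band carrier frequency is an assumed example, not a system specification.

```python
import numpy as np

# Line-of-sight displacement from repeat-pass interferometric phase.
c = 3e8
f0 = 17.2e9                      # assumed Ku-band carrier [Hz]
wavelength = c / f0

def los_displacement(phase_master, phase_slave):
    """Line-of-sight displacement [m] from two (wrapped) phases [rad]."""
    dphi = np.angle(np.exp(1j * (phase_slave - phase_master)))  # wrapped difference
    return -wavelength / (4.0 * np.pi) * dphi

print(los_displacement(0.0, 0.8))   # about -1.1e-3 m for 0.8 rad at 17.2 GHz
```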

Keywords: Interferometry, MIMO RADAR, SAR, tomography.

25 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to the wavelet coefficients prior to the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality for a given bit rate and a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
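
A minimal sketch of the subband weighting idea, assuming a CDF 9/7-like biorthogonal filter and placeholder CSF/foveation weights rather than the paper's HVS model.

```python
import numpy as np
import pywt

# Perceptual weighting of wavelet subbands before zerotree/SPIHT coding:
# each detail subband is scaled by a per-level CSF factor times a foveation
# factor that decays with distance from the fixation point. All weights and
# the test image are illustrative assumptions.

def foveation_weight(shape, fix, sigma=0.5):
    """Radial weight decaying with normalized distance from the fixation point."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    fy, fx = fix[0] * shape[0], fix[1] * shape[1]
    d = np.hypot(yy - fy, xx - fx) / max(shape)
    return np.exp(-(d / sigma) ** 2)

img = np.random.rand(256, 256)                    # placeholder image
coeffs = pywt.wavedec2(img, "bior4.4", level=5)   # 9/7-like biorthogonal filter

csf_weight = [0.3, 0.5, 0.7, 0.9, 1.0]            # assumed per-level CSF weights (coarse to fine)
fix = (0.5, 0.5)                                  # normalized fixation point (ROI centre)

weighted = [coeffs[0]]
for lvl, (cH, cV, cD) in enumerate(coeffs[1:]):
    w = csf_weight[lvl] * foveation_weight(cH.shape, fix)
    weighted.append((cH * w, cV * w, cD * w))

# The weighted coefficients would then be fed to the SPIHT encoder.
print(len(weighted) - 1, "weighted detail levels")
```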

Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.

24 An Approach to Image Extraction and Accurate Skin Detection from Web Pages

Authors: Moheb R. Girgis, Tarek M. Mahmoud, Tarek Abd-El-Hafeez

Abstract:

This paper proposes a system to extract images from web pages and then detect the skin color regions of these images. As part of the proposed system, we built a toolbar named 'Filter Tool Bar (FTB)' using the BandObject control, by modifying the Pavel Zolnikov implementation. The Yahoo! team provides the Yahoo! SDK API, which also supports image search and is really useful. In the proposed system, we introduce three new methods for extracting images from web pages (after loading the web page using the proposed FTB, before loading the web page physically from the localhost, and before loading the web page from any server). These methods overcome the drawback of the regular-expressions method for extracting images suggested by Ilan Assayag. The second part of the proposed system is concerned with the detection of the skin color regions of the extracted images. We therefore studied two well-known skin color detection techniques. The first technique is based on the RGB color space and the second is based on the YUV and YIQ color spaces. We modified the second technique to overcome its failure in detecting skin in images with complex backgrounds, using the saturation parameter to obtain accurate skin detection results. The performance evaluation of the efficiency of the proposed system in extracting images before and after loading the web page from localhost or any server, in terms of the number of extracted images, is presented. Finally, the results of comparing the two skin detection techniques in terms of the number of pixels detected are presented.
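
A minimal sketch of an RGB-rule skin detector in the spirit of the first technique; the thresholds follow a widely used heuristic and are not necessarily the values adopted in the paper, and 'page_image.jpg' is a placeholder.

```python
import numpy as np
import cv2

# Pixel-wise RGB-rule skin detection on an extracted image.
img = cv2.imread("page_image.jpg")                  # OpenCV loads in BGR order
b, g, r = [c.astype(np.int32) for c in cv2.split(img)]

skin = ((r > 95) & (g > 40) & (b > 20) &
        (np.maximum(np.maximum(r, g), b) - np.minimum(np.minimum(r, g), b) > 15) &
        (np.abs(r - g) > 15) & (r > g) & (r > b))

print(f"skin pixels detected: {int(skin.sum())}")
```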

Keywords: Browser Helper Object, Color spaces, Image and URL extraction, Skin detection, Web Browser events.

23 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduce a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed to extract 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and to automatically classify tumor stage and subtype.
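
A minimal sketch of Laws' texture filtering, the approach highlighted above: 2-D masks are formed as outer products of the classical 1-D kernels and the mean absolute filter response serves as a texture-energy feature. The input is a random placeholder rather than a PET tumour ROI.

```python
import numpy as np
from scipy.ndimage import convolve

# Laws' texture features: build 2-D masks from 1-D kernels and measure
# the local response energy for each mask.
L5 = np.array([1, 4, 6, 4, 1], float)    # level
E5 = np.array([-1, -2, 0, 2, 1], float)  # edge
S5 = np.array([-1, 0, 2, 0, -1], float)  # spot
R5 = np.array([1, -4, 6, -4, 1], float)  # ripple

kernels = {"L5": L5, "E5": E5, "S5": S5, "R5": R5}
img = np.random.rand(64, 64)             # placeholder for a tumour ROI

features = {}
for n1, k1 in kernels.items():
    for n2, k2 in kernels.items():
        mask = np.outer(k1, k2)
        response = convolve(img, mask, mode="reflect")
        features[n1 + n2] = float(np.mean(np.abs(response)))   # texture energy

print(len(features), "Laws texture features")   # 16 features in this sketch
```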

Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.

22 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction

Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz

Abstract:

This paper proposes new enhancements to nonlinear anisotropic diffusion methods to greatly reduce speckle and preserve image features in medical ultrasound images. By incorporating local physical characteristics of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge-enhancing (EE) and coherence-enhancing (CE) diffusion. The new enhancement methods were tested using various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction and of edge and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), the adaptive weighted median filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
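
A minimal sketch of gradient-weighted nonlinear diffusion (Perona-Malik style); the paper's scatterer-density weighting is only indicated by a placeholder factor, and all constants are illustrative.

```python
import numpy as np

# Gradient-weighted nonlinear diffusion sketch. The placeholder
# `density_term` marks where a scatterer-density estimate would also
# modulate the diffusivity, as in SDWNAD; here it is fixed to 1.

def diffuse(img, n_iter=30, kappa=0.05, dt=0.15):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic borders via roll)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        density_term = 1.0                        # placeholder for scatterer density
        g = lambda d: density_term * np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

speckled = np.random.rand(128, 128)
smoothed = diffuse(speckled)
print(f"std before {speckled.std():.3f}, after {smoothed.std():.3f}")
```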

Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.

21 SMaTTS: Standard Malay Text to Speech System

Authors: Othman O. Khalifa, Zakiah Hanim Ahmad, Teddy Surya Gunawan

Abstract:

This paper presents a rule-based text-to-speech (TTS) synthesis system for Standard Malay, namely SMaTTS. The proposed system uses a sinusoidal method and some pre-recorded wave files to generate speech. The use of a phone database significantly decreases the amount of computer memory used, making the system very light and embeddable. The overall system comprises two phases. The first is Natural Language Processing (NLP), which consists of the high-level processing of text analysis, phonetic analysis, text normalization and a morphophonemic module; this module was designed specially for Standard Malay (SM) to overcome several problems in defining the rules of the SM orthography system before the text can be passed to the DSP module. The second phase is Digital Signal Processing (DSP), which operates on the low-level process of speech waveform generation. An intelligible and adequately natural-sounding formant-based speech synthesis system with a light and user-friendly graphical user interface (GUI) is introduced. A Standard Malay phoneme set and an inclusive phone database have been constructed carefully for this phone-based speech synthesizer. By applying generative phonology, comprehensive letter-to-sound (LTS) rules and a pronunciation lexicon have been created for SMaTTS. For the evaluation tests, a Diagnostic Rhyme Test (DRT) word list was compiled and several experiments were performed to evaluate the quality of the synthesized speech by analyzing the Mean Opinion Score (MOS) obtained. The overall performance of the system, as well as the room for improvement, is thoroughly discussed.

Keywords: Natural Language Processing, Text-To-Speech (TTS), Diphone, source filter, low-/high-level synthesis.

20 Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization

Authors: V. H. Mankar, T. S. Das, S. K. Sarkar

Abstract:

In this paper, we propose a novel blind watermarking architecture oriented towards hardware implementation in VLSI. In order to facilitate this hardware realization, the cellular automata (CA) concept is introduced. CA have already been accepted as an attractive structure for VLSI implementation because of their modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread spectrum watermarking techniques are very few in number, in spite of their strong resilience against signal impairments, because of the computational cost and complexity associated with their filter banks and lifting techniques. Cellular automata theory is incorporated in order to form a new transform domain technique, the Cellular Automata Transform (CAT). Since CA provide spreading sequences with very low cross-correlation properties, a CA-based pseudorandom sequence generator is considered in the present work. Considering the watermarking technique as a digital communication process, error control coding (ECC) must be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual blocks of the algorithm using CA provide better results than some other methods, irrespective of the hardware or software technique. The Cellular Automata Transform, the CA-based PN sequence generator, and the CA ECC are the requisite blocks that are developed not only to meet reliable hardware requirements but also to provide the basic spread spectrum watermarking features. The proposed algorithm shows statistical invisibility and resiliency against various common signal-processing operations. This algorithmic design utilizes the allocated bandwidth in the data transmission channel in a more efficient manner.
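
A minimal sketch of a CA-based pseudo-random (PN) sequence generator, using an elementary rule-30 automaton; the register width and sampled cell are illustrative choices, not the paper's design.

```python
import numpy as np

# Rule-30 cellular automaton sampled at its centre cell as a PN bit source.
def ca_pn_sequence(length, width=64, rule=30, seed=1):
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.zeros(width, dtype=np.uint8)
    state[seed % width] = 1
    out = np.empty(length, dtype=np.uint8)
    for t in range(length):
        out[t] = state[width // 2]                       # sample the centre cell
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = rule_bits[(left << 2) | (state << 1) | right]
    return out

pn = ca_pn_sequence(1024)
print(pn[:16], "balance:", pn.mean())   # the balance should hover around 0.5
```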

Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.

19 Financial Regulations in the Process of Global Financial Crisis and Macroeconomics Impact of Basel III

Authors: M. Okan Tasar

Abstract:

Basel III (or the Third Basel Accord) is a global regulatory standard on bank capital adequacy, stress testing and market liquidity risk, agreed upon by the members of the Basel Committee on Banking Supervision in 2010-2011 and scheduled to be introduced from 2013 until 2018. Basel III is a comprehensive set of reform measures. These measures aim to (1) improve the banking sector's ability to absorb shocks arising from financial and economic stress, whatever the source, (2) improve risk management and governance, and (3) strengthen banks' transparency and disclosures. Similarly, the reforms target (1) bank-level, or micro-prudential, regulation, which will help raise the resilience of individual banking institutions in periods of stress, and (2) macro-prudential regulation of system-wide risks that can build up across the banking sector, as well as the pro-cyclical implications of these risks over time. These two approaches to supervision are complementary, as greater resilience at the individual bank level reduces the risk of system-wide shocks. Regarding the macroeconomic impact of Basel III, the OECD estimates that the medium-term impact of Basel III implementation on GDP growth is in the range of -0.05 percent to -0.15 percent per year. Economic output is mainly affected by an increase in bank lending spreads, as banks pass a rise in bank funding costs, due to higher capital requirements, on to their customers. The estimated effects on GDP growth assume no active response from monetary policy; the impact of Basel III on economic output could be offset by a reduction (or delayed increase) in monetary policy rates of about 30 to 80 basis points. The aim of this paper is to create a framework based on the recent regulations in order to prevent financial crises, so that the lessons of the global financial crisis can help to address financial crises that may occur in future periods. The first part of the paper examines the effects of the global crisis on the banking system and the concept of financial regulation. The second part analyzes financial regulations and, in particular, Basel III. The last section explores the possible macroeconomic consequences of Basel III.

Keywords: Banking Systems, Basel III, Financial regulation, Global Financial Crisis.

18 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network

Authors: Z. Abdollahi Biron, P. Pisu

Abstract:

Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange information with each other and with the infrastructure. Although this interconnection of vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges comes with this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management system of the individual vehicles and of the overall system. Such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect the individual vehicle's safe operation and in turn create a potentially unsafe node in the vehicular network. Furthermore, sending faulty sensor information to other vehicles, together with failures in actuators, may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Cooperative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Furthermore, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
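
A minimal sketch of the packet-loss mitigation idea: a constant-velocity Kalman filter estimates the preceding vehicle's position and speed, and when a packet is dropped only the prediction step runs. The state model, noise levels and drop rate are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Constant-velocity Kalman filter that tolerates dropped V2V packets.
dt = 0.1
F = np.array([[1, dt], [0, 1]])             # state: [position, speed]
H = np.array([[1.0, 0.0]])                  # position is received over V2V
Q = np.diag([1e-3, 1e-2])
R = np.array([[0.05]])

x = np.zeros((2, 1))
P = np.eye(2)

def kf_step(z=None):
    """One predict(+update) cycle; pass z=None when the packet is dropped."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x.ravel()

for k in range(50):
    dropped = np.random.rand() < 0.3         # assumed 30% packet drop rate
    meas = None if dropped else 20.0 + 1.5 * k * dt
    est = kf_step(meas)
print("estimated position/speed:", est)
```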

Keywords: Fault diagnostics, communication network, connected vehicles, packet drop out, platoon.

17 Inner and Outer School Contextual Factors Associated with Poor Performance of Grade 12 Students: A Case Study of an Underperforming High School in Mpumalanga, South Africa

Authors: Victoria L. Nkosi, Parvaneh Farhangpour

Abstract:

Often a Grade 12 certificate is perceived as a passport to tertiary education and the minimum requirement to enter the world of work. In spite of its importance, many students in South Africa do not reach this milestone. It is important to find out why so many students still fail in spite of the transformation of the education system in the post-apartheid era. Given the complexity of education and its context, this study adopted a case study design to examine one historically underperforming high school in Bushbuckridge, Mpumalanga Province, South Africa in 2013. The aim was to gain an understanding of the inner and outer school contextual factors associated with the high failure rate among Grade 12 students. Government documents and reports were consulted to identify factors in the district and the village surrounding the school, and a student survey was conducted to identify school, home and student factors. A randomly sampled half of the population of Grade 12 students (53) participated in the survey, and the quantitative data were analyzed using descriptive statistical methods. The findings showed that a host of factors is at play. The school is located in a village within a municipality which has been one of the three poorest municipalities in South Africa and has the lowest Grade 12 pass rate in Mpumalanga province. Moreover, over half of the students' families are headed by single parents, 43% are unemployed and the majority have a low level of education. In addition, most families (83%) do not have basic study materials such as a dictionary, books, tables, and chairs. A significant number of students (70%) are over-aged (19+ years old), and close to half of them (49%) are grade repeaters. The school itself lacks essential resources, namely computers, science laboratories, a library, and enough furniture and textbooks. Moreover, teaching and learning are negatively affected by the teachers' occasional absenteeism, inadequate lesson preparation, and poor communication skills. Overall, the continuously low performance of students in this school mirrors the vicious circle of multiple negative conditions present within and outside of the school. The complexity of factors associated with the underperformance of Grade 12 students in this school calls for a multi-dimensional intervention from government and stakeholders. One important intervention should be the placement of over-aged students and grade repeaters in suitable educational institutions for the benefit of other students.

Keywords: Inner context, outer context, over-aged students, vicious circle.

16 Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients

Authors: Mbainaibeye Jérôme, Noureddine Ellouze

Abstract:

The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that there is no compression gain to be obtained from coding the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information about whether the wavelet coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information about whether the wavelet coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm was developed and simulations were performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five decomposition scales are performed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and is shown to be very successful in terms of PSNR.
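
A minimal sketch of the observation exploited by separate sign coding: among coefficients that become significant in the earliest bit planes, the sign need not be equiprobable, so entropy coding the sign map can save bits. A random placeholder image is used here, so the measured probabilities will sit near 0.5; on natural images the skew tends to appear in the early planes.

```python
import numpy as np
import pywt

# Measure the empirical sign probability (and its entropy) of detail
# coefficients that are significant at successive bit planes.
img = np.random.rand(256, 256)                     # placeholder image
coeffs = pywt.wavedec2(img, "bior4.4", level=5)
detail = np.concatenate([c.ravel() for lvl in coeffs[1:] for c in lvl])

max_bits = int(np.ceil(np.log2(np.abs(detail).max())))
for plane in range(max_bits, max_bits - 5, -1):
    thr = 2.0 ** plane
    sig = np.abs(detail) >= thr                    # significant at this plane
    if sig.any():
        p_pos = np.mean(detail[sig] > 0)           # empirical sign probability
        h = -sum(p * np.log2(p) for p in (p_pos, 1 - p_pos) if p > 0)
        print(f"plane {plane}: P(+) = {p_pos:.2f}, sign entropy {h:.2f} bit")
```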

Keywords: Image compression, wavelet transform, sign coding, magnitude coding.

15 Simulation Data Management Approach for Developing Adaptronic Systems – The W-Model Methodology

Authors: Roland S. Nattermann, Reiner Anderl

Abstract:

Existing process models for the development of mechatronic systems provide for largely parallel work in the detailed development phase, which takes place largely independently in the various disciplines involved. An approach for a new process model provides a further development of existing models for the development of adaptronic systems. This approach is based on an intermediate integration and an abstract modeling of the adaptronic system. Based on this system model, a simulation of the global system behavior due to external and internal factors or forces is developed. For the intermediate integration, a special data management system is used. According to the presented approach, this data management system has a number of functions that are not part of the "normal" PDM functionality. Therefore, a concept for a new data management system for the development of adaptronic systems is presented in this paper. This concept divides the functions into six layers. In the first layer a system model is created, which divides the adaptronic system based on its components and the various technical disciplines. Moreover, the parameters and properties of the system are modeled and linked together with the requirements and the system model. The modeled parameters and properties result in a network, which is analyzed in the second layer. From this analysis, the adjustments to individual components needed for a specific manipulation of the system behavior can be determined. The third layer contains an automatic abstract simulation of the system behavior. This simulation is a precursor for the network analysis and serves as a filter. Through the network analysis and simulation, changes to system components are examined and necessary adjustments to other components are calculated. The other layers of the concept treat the automatic calculation of system reliability, the "normal" PDM functionality and the integration of discipline-specific data into the system model. A prototypical implementation of an appropriate data management system, with the addition of automatic system development, is being implemented using the data management system ENOVIA SmarTeam V5 and the simulation system MATLAB.

Keywords: Adaptronic, Data-Management, LOEWE-Centre AdRIA.

14 Smart Sustainable Cities: An Integrated Planning Approach towards Sustainable Urban Energy Systems, India

Authors: Adinarayanane Ramamurthy, Monsingh D. Devadas

Abstract:

Cities simultaneously represent a challenge and an opportunity for climate change policy. Cities are the places where most energy services are needed, because urbanization is closely linked to high population densities and to the concentration of economic activities and production (urban energy demand). Consequently, it is critical to explain the role of cities within the world's energy systems and its correlation with the climate change issue. With more than half of the world's population already living in urban areas, and that percentage expected to rise to 75 percent by 2050, it is clear that the path to sustainable development must pass through cities. Cities expanding in size and population pose increased challenges to the environment, of which energy is a part as a natural resource, and to the quality of life. Nowadays, most cities have already understood the importance of sustainability, both at the local scale and in terms of their contribution to sustainability at higher geographical scales. This requires the perception of a city as a complex and dynamic ecosystem, an open system, or a cluster of systems, where energy as well as the other natural resources is transformed to satisfy the needs of the different urban activities. In fact, buildings and transportation generally represent most of a city's direct energy demand, i.e., between 60 and 80 percent of the overall consumption. Buildings, both residential and services, are usually influenced by the local physical and social conditions. In terms of transport, the energy demand is also strongly linked to the specific characteristics of a city (urban mobility). The concept of a "smart city" builds on seven key axes of a city's success in moving towards a common platform, the "brain nerve", of sustainable urban energy systems. With the aforesaid knowledge, the authors suggest a framework for the role of cities as energy actors in smart city management. The authors discuss the potential elements needed for energy in smart cities and also identify potential energy actions and the relevant barriers. Furthermore, three levels of city smartness in cities' actions to overcome market/institutional failures with a local approach are distinguished. The authors attempt to conceive and implement concepts of city smartness by adopting the city or local government as the nerve centre, through an integrated planning approach. Finally, the paper concludes with recommendations for the organization of smart sustainable cities for positive change in urban India.

Keywords: Urbanization, Urban Energy Demand, Sustainable Urban Energy Systems, Integrated Planning Approach, Smart Sustainable City.

13 Use of Cellulosic Fibres in Double Layer Porous Asphalt

Authors: Márcia Afonso, Marisa Dinis-Almeida, Cristina Fael

Abstract:

Climate change, namely the alteration of precipitation patterns, has led to extreme conditions such as floods and droughts. In turn, excessive construction has led to the waterproofing of the soil, increasing surface runoff and decreasing the groundwater recharge capacity. Permeable pavements used in areas with low traffic reduce the probability of flood peaks and reduce sediment and pollutant transport, ensuring an improvement in rainwater quality. This study aims to evaluate the performance of porous asphalt, developed in the laboratory, with the addition of cellulosic fibres. One of the main objectives of using cellulosic fibres is to stop binder drainage, preventing its loss during storage and transport. Compared to conventional porous asphalt, the addition of cellulosic fibres improved the porous asphalt performance. The cellulosic fibres allowed the bitumen content to be increased, enabling retention and better coating of the aggregates and, consequently, greater mixture durability. With this solution, the intention is to develop better practices of resilience and adaptation to extreme climate change and to respond to current sustainability demands through the use of eco-friendly materials. The mix design was performed for different aggregate sizes (with fine aggregates, PA1, and with coarse aggregates, PA2). The influence of the percentage of fibres used was studied. It was observed that, overall, the binder drainage decreases as the cellulosic fibre percentage increases. It was found that the PA2 mixture showed more binder drainage than the PA1 mixture, irrespective of the fibre percentage used. Subsequently, the performance was evaluated through laboratory tests of indirect tensile stiffness modulus, water sensitivity, permeability and permanent deformation. The stiffness moduli of the two mixture groups (with and without cellulosic fibres) presented very similar values. The water sensitivity test showed that porous asphalt containing more fine aggregates is more susceptible to the presence of water than mixtures with coarse aggregates. Porous asphalt with coarse aggregates has more air voids, which allow water to pass easily, leading to higher ITSR values. The permeability test showed that porous asphalt without cellulosic fibres presented lower permeability than porous asphalt with cellulosic fibres. The resistance to permanent deformation results indicate better behaviour of the porous asphalt with cellulosic fibres, with a larger rut depth observed in the porous asphalt without cellulosic fibres. In this study, it was observed that porous asphalt with higher bitumen percentages shows improved resistance to permanent deformation; this was only possible due to the retention of the bitumen by the cellulosic fibres.

Keywords: Binder drainage, cellulosic fibres, permanent deformation, porous asphalt.

12 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail

Abstract:

In recent decades, medical imaging was dominated by the use of costly film media for the review and archiving of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web was produced. Web technologies have been used successfully in telemedicine applications, and the combination of web technologies with DICOM was used to design a web-based, open-source DICOM viewer. The web server allows querying and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP 'Apache server' is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers a few advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, it is platform independent and allows images to be displayed and manipulated efficiently, and it is user-friendly and easy to integrate with an existing system that already makes use of web technologies. A wavelet-based image compression technique is used, in which the 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are transmitted with entropy encoding after thresholding, to decrease transmission time, storage cost and capacity. The performance of the compression was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
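
A minimal sketch of the compression and quality measurement described above, using a 'coif3' decomposition, hard thresholding, and MSE/PSNR plus a coefficient-based compression ratio; the input is a random placeholder instead of a DICOM slice and the threshold value is illustrative.

```python
import numpy as np
import pywt

# coif3 wavelet compression sketch with MSE/PSNR and a coefficient-based CR.
img = (np.random.rand(256, 256) * 255).astype(np.float64)   # placeholder slice

coeffs = pywt.wavedec2(img, "coif3", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
thr = 0.1 * np.abs(arr).max()                   # illustrative threshold
arr_t = pywt.threshold(arr, thr, mode="hard")

rec = pywt.waverec2(pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"),
                    "coif3")
rec = rec[:img.shape[0], :img.shape[1]]

mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
cr = 100.0 * (1.0 - np.count_nonzero(arr_t) / arr_t.size)   # % of coefficients zeroed

print(f"MSE={mse:.2f}  PSNR={psnr:.2f} dB  CR~{cr:.1f}%")
```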

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN.

11 Simulation of Concrete Wall Subjected to Airblast by Developing an Elastoplastic Spring Model in Modelica Modelling Language

Authors: Leo Laine, Morgan Johansson

Abstract:

To meet civilization's future needs for safe living and a low environmental footprint, the engineers designing the complex systems of tomorrow will need efficient ways to model and optimize these systems for their intended purpose. For example, a civil defence shelter and its subsystem components need to withstand, e.g., airblast and ground shock from a design-level explosion detonating at a certain distance from the structure. In addition, the complex civil defence shelter needs functioning air filter systems to protect from toxic gases, and clean air, clean water, heat, and electricity need to be available through shock- and vibration-safe fixtures and connections. Similar complex building systems can be found in any concentrated living or office area. In this paper, the authors use a multidomain modelling language called Modelica to model a concrete wall as a single-degree-of-freedom (SDOF) system with elastoplastic properties and an implemented option for plastic hardening. The elastoplastic model was developed and implemented in the open-source tool OpenModelica. The simulation model was tested on a case with a transient equivalent reflected pressure time history representing an airblast from 100 kg TNT detonating 15 meters from the wall. The concrete wall is approximately regarded as a concrete strip of 1.0 m width. This load represents a realistic threat to any building in a city-like area. The OpenModelica model results were compared with an Excel implementation of an SDOF model with an elastic-plastic spring using a simple fixed-timestep central difference solver. The structural displacement results agreed very well with each other in terms of plastic displacement magnitude, elastic oscillation displacement, and response times.
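
A minimal sketch of such an SDOF benchmark, written in Python rather than Modelica: an elastic-perfectly-plastic spring integrated with a fixed-timestep central difference scheme under an idealized triangular pulse. Mass, stiffness, yield force and load values are illustrative, not the 100 kg TNT design load of the paper.

```python
import numpy as np

# SDOF system with an elastic-perfectly-plastic spring, central difference scheme.
m, k, R_y = 500.0, 2.0e6, 4.0e4          # mass [kg], stiffness [N/m], yield force [N] (assumed)
dt, t_end = 1.0e-4, 0.10
P0, t_d = 2.0e5, 0.01                    # peak load [N], load duration [s] (assumed)

def load(t):                              # idealized triangular pressure pulse
    return P0 * (1 - t / t_d) if t < t_d else 0.0

n = int(t_end / dt)
u = np.zeros(n + 1)
f_int = 0.0
for i in range(1, n):
    # incremental elastic trial force, capped at the perfectly plastic yield force
    trial = f_int + k * (u[i] - u[i - 1])
    f_int = float(np.clip(trial, -R_y, R_y))
    acc = (load(i * dt) - f_int) / m
    u[i + 1] = 2 * u[i] - u[i - 1] + acc * dt ** 2   # central difference update

print(f"peak displacement: {u.max()*1000:.1f} mm, residual: {u[-1]*1000:.1f} mm")
```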

Keywords: Airblast from explosives, elastoplastic spring model, Modelica modelling language, SDOF, structural response of concrete structure.

10 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investment became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the basis for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that build a mathematical filter for investment opportunities, and describes the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. The investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over 24 months of investment. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
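
A minimal sketch of the general idea, not the author's Price Prediction Line formula (which the abstract does not disclose): a trend line is fitted over a rolling window of closes, and entry/exit signals plus a limit condition are derived from the deviation of price from the line. All thresholds and the synthetic series are illustrative.

```python
import numpy as np

# Rolling trend line with deviation-based entry/exit signals and a limit
# condition (trade only with the trend, only on a sufficient pullback).
def signals(closes, window=50, entry_k=1.5, exit_k=0.2):
    closes = np.asarray(closes, float)
    out = []
    for t in range(window, len(closes)):
        y = closes[t - window:t]
        x = np.arange(window)
        slope, intercept = np.polyfit(x, y, 1)
        line = intercept + slope * window            # trend line value for "now"
        sigma = np.std(y - (intercept + slope * x))  # dispersion around the line
        dev = closes[t] - line
        if slope > 0 and dev < -entry_k * sigma:     # limit condition + long entry
            out.append((t, "BUY"))
        elif abs(dev) < exit_k * sigma:
            out.append((t, "EXIT"))
    return out

prices = 12000 + np.cumsum(np.random.normal(0, 30, 500))   # synthetic index-like series
print(signals(prices)[:5])
```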

Keywords: Algorithmic trading, automated investment system, DAX Deutscher Aktienindex.

9 The Application of Dynamic Network Process to Environment Planning Support Systems

Authors: Wann-Ming Wey

Abstract:

In recent years, in addition to facing external threats such as energy shortages and climate change, traffic congestion and environmental pollution have become pressing problems for many cities. Considering that private automobile-oriented urban development has produced many negative environmental and social impacts, transit-oriented development (TOD) has been considered a sustainable urban model. TOD encourages public transport combined with pedestrian- and cycling-friendly environment designs; moreover, non-motorized modes help improve human health, save energy, and reduce carbon emissions. Because environmental changes often affect planners' decision-making, this research applies the dynamic network process (DNP), which includes the time-dependent concept, to promoting friendly walking and cycling environmental designs as an advanced planning support system for environmental improvements.

This research aims to determine what kinds of design strategies can improve a friendly walking and cycling environment under TOD. First, we collate and analyze environment design factors by reviewing the relevant literature, and group fifteen environment design factors into the three aspects of "safety", "convenience", and "amenity". Furthermore, we utilize a fuzzy Delphi technique (FDT) expert questionnaire to filter out the more important design criteria for the study case. Finally, we utilize a DNP expert questionnaire to obtain the weight changes at different time points for each design criterion. Based on the changing trends of each criterion weight, we are able to develop appropriate design strategies as a reference for planners to allocate resources in a dynamic environment. In order to illustrate the proposed approach, Taipei City is used as an empirical study case, and the results are analyzed in depth to explain the application of our proposed approach.

Keywords: Environment Planning Support Systems, Walking and Cycling, Transit-oriented Development (TOD), Dynamic Network Process (DNP).

8 Spatial Structure of First-Order Voronoi for the Future of Roundabout Cairo since 1867

Authors: Ali Essam El Shazly

Abstract:

The Haussmannization plan of Cairo in 1867 formed a regular network of roundabout spaces, though it has deteriorated at present. The method of identifying the spatial structure of roundabout Cairo for conservation matches the Voronoi diagram with space syntax through their shared geometrical property of spatial convexity. In this initiative, the primary convex hull of the first-order Voronoi adopts the integration and control measurements of space syntax on Cairo's roundabout generators. The functional essence of the royal palaces optimizes the roundabout structure in terms of spatial measurements and the symbolic Voronoi projection of 'Tahrir Roundabout' over the Giza Nile and Pyramids. Some roundabouts of major public and commercial landmarks surround the pole of 'Ezbekia Garden' with higher control than integration measurements, which filters the new spatial structure from the adjacent traditional town. Nevertheless, the lowest integration and control measures correspond to the Voronoi contents of pollutant workshops and the plateau of the old Cairo Citadel, with the visual compensation of new royal landmarks on top. Meanwhile, the extended suburbs of infinite Voronoi polygons arrange high-control generators of chateaux housing in the 'garden city' environs. The point pattern of roundabouts determines the geometrical characteristics of the Voronoi polygons. The measured lengths of Voronoi edges alternate between a zoned short range at the new poles of Cairo and a distributed structure of longer range. Nevertheless, the shortest range of generator-vertex geometry concentrates at 'Ezbekia Garden', where the crossways of vast Cairo intersect, which maximizes the variety of choice at different spatial resolutions. However, the symbolic 'Hippodrome', which is the largest public landmark, forms exclusive geometrical measurements while structuring a most integrative roundabout to parallel the royal syntax. An overview of the symbolic convex hull of the Voronoi with space syntax interconnects Parisian Cairo with the spatial chronology of scattered monuments to conceive one universal Cairo structure. Accordingly, the proposed 'Voronoi-syntax' methodology informs the future conservation of roundabout Cairo at the inferred city-level concept.
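
A minimal sketch of the geometric step, assuming generator coordinates are available: the first-order Voronoi diagram of the roundabout generators is built and the lengths of its finite edges are measured. The coordinates below are random placeholders, not the surveyed Cairo roundabouts.

```python
import numpy as np
from scipy.spatial import Voronoi

# First-order Voronoi diagram of roundabout generator points and the
# lengths of its finite edges (infinite boundary edges are skipped).
pts = np.random.rand(20, 2) * 1000.0        # placeholder generator coordinates [m]
vor = Voronoi(pts)

edge_lengths = []
for (v1, v2) in vor.ridge_vertices:
    if v1 >= 0 and v2 >= 0:                  # -1 marks an infinite edge
        edge_lengths.append(np.linalg.norm(vor.vertices[v1] - vor.vertices[v2]))

print(f"{len(edge_lengths)} finite edges, mean length {np.mean(edge_lengths):.1f} m")
```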

Keywords: Roundabout Cairo, first-order Voronoi, space syntax, spatial structure.

7 Transformation of Aluminum Unstable Oxyhydroxides in Ultrafine α-Al2O3 in Presence of Various Seeds

Authors: T. Kuchukhidze, N. Jalagonia, Z. Phachulia, R. Chedia

Abstract:

Ceramics obtained on the basis of aluminum oxide have a wide range of applications because of unique properties such as wear resistance, dielectric characteristics, and the ability to operate at high temperatures and in corrosive atmospheres. Low-temperature synthesis of α-Al2O3 is an energy-economical process, and it is topical for developing technologies for corundum ceramics fabrication. In the present work, the possibilities of low-temperature transformation of oxyhydroxides into α-Al2O3 in the presence of small amounts of rare-earth element compounds (as well as Th and Re) are discussed. Unstable aluminum oxyhydroxides were obtained by hydrolysis of aluminium isopropoxide, nitrates, sulphate, and chloride in an alkaline environment at 80-90 ºC. β-Al(OH)3 was obtained from aluminum powder by ultrasonic treatment. Drying of the oxyhydroxide sol was conducted in the presence of various types of seeds, in amounts of 0.1-0.2% (mass). Neodymium, holmium, thorium, lanthanum, cerium, gadolinium and dysprosium nitrates and rhenium carbonyls were used as seeds and added to the sol specimens in amounts of 0.1-0.2% (mass), calculated on the metals. Annealing of the obtained gels was carried out at 70-1100 ºC for 2 hrs. The same specimen transforms into α-Al2O3 at 1100 ºC; at this temperature, in the presence of lanthanum and gadolinium, the transformation proceeds to 70-85%, while in the presence of thorium, stabilization of the γ- and θ-phases takes place. It is established that thorium inhibits α-phase generation at 1100 ºC, whereas in all other doped specimens the α-phase is generated at lower temperatures (1000-1050 ºC). Synthesis of various compounds and simultaneous consolidation were carried out in an OXY-GON furnace, in which composite materials containing oxide and non-oxide components with properties close to theoretical values were obtained. The following devices were used in this work: X-ray diffractometer DRON-3M (Cu-Kα, Ni filter, 2º/min), high-temperature vacuum furnace OXY-GON, electronic scanning microscopes Nikon ECLIPSE LV 150 and NMM-800TRF, planetary mill Pulverisette 7 premium line, SHIMADZU Dynamic Ultra Micro Hardness Tester DUH-211S, and Analysette 12 DynaSizer.

Keywords: α-Alumina, combustion, consolidation, phase transformation, seeding.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 4084
6 Roughness and Hardness of 60/40 Cu-Zn Alloy

Authors: Pavana Manvikar, G. K. Purohit

Abstract:

The functional performance of machined components often depends on surface topography, hardness, and the nature of the stress and strain induced on the surface. Invariably, the surfaces of metallic components obtained by turning, milling and similar operations contain irregularities, such as machining marks, that are responsible for these effects. Surface finishing/coating processes used to produce improved surface quality and textures are classified as chip-removal and chip-less processes. Burnishing is a chip-less cold-working process carried out to improve surface finish, hardness and resistance to fatigue and corrosion to a degree not obtainable by other surface coating and surface treatment processes. It is a very simple but effective method which improves surface characteristics and is reported to introduce compressive stresses.

Of late, considerable attention has been paid to post-machining finishing operations such as burnishing. During burnishing the micro-irregularities start to deform plastically: initially the crests are gradually flattened and zones of reduced deformation are formed. When all the crests are deformed, the valleys between the micro-irregularities start moving toward the newly formed surface. The grain structure is then condensed, producing a smoother and harder surface with superior load-carrying and wear-resistance capabilities.

Burnishing can be performed on a lathe with a highly polished ball or roller type tool which is traversed under force over a rotating/stationary work piece. Often, several passes are used to obtain the work piece surface with the desired finish and hardness.

This paper presents the findings of an experimental investigation into the effect of ball burnishing parameters, namely burnishing speed, feed, force and number of passes, on the surface roughness (Ra) and micro-hardness (Hv) of a 60/40 copper/zinc alloy, using a 2-level fractional factorial design of experiments (DoE). Mathematical models were developed to predict the surface roughness and hardness generated by burnishing in terms of the above process parameters. A ball-type tool, designed and constructed from a high-chrome steel (HRC = 63, Ra = 0.012 µm), was used for burnishing fine-turned cylindrical bars (Ra = 0.68-0.78 µm, 145 Hv). The fitted models are given by:

Ra = 0.305 - 0.005X1 - 0.0175X2 + 0.0525X4 + 0.0125X1X4 - 0.02X2X4 - 0.0375X3X4

Hv = 160.625 - 2.375X1 + 5.125X2 + 1.875X3 + 4.375X4 - 1.625X1X4 + 4.375X2X4 - 2.375X3X4

High surface micro-hardness (175 Hv) was obtained at 400 rpm, 2 passes, 0.05 mm/rev and 15 kgf, and a high surface finish (0.20 µm) was achieved at 30 kgf, 0.1 mm/rev, 112 rpm and a single pass. In other words, surface finish improved by 350% and micro-hardness improved by 21% compared with the as-machined condition.
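
As a quick illustration (not part of the original study), the sketch below evaluates the two fitted DoE models at coded factor levels; it assumes the usual -1/+1 coding of a two-level design and the factor order speed (X1), feed (X2), force (X3) and number of passes (X4), which should be checked against the paper before reuse.

```python
# Hedged sketch: evaluate the fitted fractional-factorial models for Ra and Hv.
# Assumes coded levels in [-1, +1] and the factor order X1=speed, X2=feed,
# X3=force, X4=passes; both assumptions need verification against the paper.

def predict_ra(x1, x2, x3, x4):
    """Predicted surface roughness Ra (µm) at coded factor levels."""
    return (0.305 - 0.005 * x1 - 0.0175 * x2 + 0.0525 * x4
            + 0.0125 * x1 * x4 - 0.02 * x2 * x4 - 0.0375 * x3 * x4)

def predict_hv(x1, x2, x3, x4):
    """Predicted micro-hardness Hv at coded factor levels."""
    return (160.625 - 2.375 * x1 + 5.125 * x2 + 1.875 * x3 + 4.375 * x4
            - 1.625 * x1 * x4 + 4.375 * x2 * x4 - 2.375 * x3 * x4)

if __name__ == "__main__":
    # Compare all factors at their high (+1) level with all at their low (-1) level.
    print("Ra at all-high levels:", predict_ra(+1, +1, +1, +1))
    print("Hv at all-high levels:", predict_hv(+1, +1, +1, +1))
    print("Ra at all-low levels: ", predict_ra(-1, -1, -1, -1))
    print("Hv at all-low levels: ", predict_hv(-1, -1, -1, -1))
```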

Keywords: Ball burnishing, surface roughness, micro-hardness.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2532
5 Valorization of Lignocellulosic Wastes – Evaluation of Its Toxicity When Used in Adsorption Systems

Authors: Isabel Brás, Artur Figueirinha, Bruno Esteves, Luísa P. Cruz-Lopes

Abstract:

Agricultural lignocellulosic by-products are receiving increased attention, namely in the search for filter materials that retain contaminants from water. These by-products, specifically almond and hazelnut shells, are abundant in Portugal, since almond and hazelnut production is an important local activity. The main constituents of hazelnut and almond shells are lignin, cellulose and hemicelluloses, together with water-soluble extractives and tannins. During the adsorption of heavy metals from contaminated waters, water-soluble compounds can leach from the shells and have a negative impact on the environment. Usually, the chemical characterization of treated water by itself may not reveal the environmental impact of the discharges when the parameters comply with the legal water quality standards; only biological systems can detect the toxic effects of the water constituents. Therefore, the evaluation of toxicity by biological tests is very important when deciding on the suitability of the water for safe discharge or for irrigation applications.

The main purpose of the present work was to assess the potential impacts of waters after been treated for heavy metal removal by hazelnut and almond shells adsorption systems, with short term acute toxicity tests.

To conduct the study, water at pH 6 containing 25 mg.L-1 of lead was treated with 10 g of shell per litre of wastewater for 24 hours; this procedure was followed for each type of shell. Afterwards, the water was collected for toxicological assays, namely bacterial resistance, seed germination, the Lemna minor L. test and plant growth. The effect on isolated bacterial strains was determined by the disc diffusion method, and the seed germination index was evaluated using lettuce, with temperature and humidity controlled during 7 days of germination. For aquatic higher organisms, Lemna plants were used with a 4-day contact time with the shell solutions, under controlled light and temperature. For terrestrial higher plants, biomass production was evaluated 14 days after tomato germination in soil, with controlled humidity, light and temperature.
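
For context, the standard batch-adsorption bookkeeping behind such a treatment step is sketched below; the equilibrium lead concentration used here is a hypothetical placeholder, not a result reported by the authors.

```python
# Minimal sketch of standard batch-adsorption calculations:
# removal efficiency and adsorption capacity q = (C0 - Ce) * V / m.
# Ce below is a hypothetical placeholder, not a value from the study.

C0 = 25.0   # initial Pb concentration, mg/L (from the described procedure)
Ce = 5.0    # hypothetical equilibrium Pb concentration after 24 h, mg/L
V = 1.0     # treated volume, L
m = 10.0    # shell dose, g (10 g per litre, as described)

removal_percent = (C0 - Ce) / C0 * 100.0   # percentage of lead removed
q = (C0 - Ce) * V / m                      # adsorption capacity, mg Pb per g shell

print(f"removal: {removal_percent:.1f} %")
print(f"adsorption capacity q: {q:.2f} mg/g")
```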

Toxicity tests of the water treated with the shells revealed, to some extent, effects on the tested organisms, although the assays showed behaviour close to that of the control, leading to the conclusion that further utilization of these materials may not be considered to pose a serious risk to the environment.

Keywords: Acute toxicity tests, adsorption, lignocellulosic wastes, risk assessment.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2244
4 Bridging the Gap: Living Machine in Educational Nature Preserve Center

Authors: Zakeia Benmoussa

Abstract:

Pressure on freshwater systems comes from removing too much water to grow crops and from contamination by economic activities, land use practices, and human waste. This paper focuses on how water management can influence the design, implementation, and impacts of the ecological principles of biomimicry as sustainable methods for recycling wastewater. In the state of Texas, United States of America, and in particular the lower area of the Trinity River refuge, there is a true example of the diversity to be found in that region, whether exploring the lands or the waterways. However, as the Trinity River supplies water to the state's residents, the lower part of the river at Liberty County presents several problems of wastewater discharge into the river. Therefore, conservation efforts are particularly important in the Trinity River basin, and alternative ways must be considered in order to conserve water to meet future demands. As a result, another system should be provided rather than conventional water treatment. Mimicking ecosystem technologies out of context is not enough, but if plants are incorporated into building architecture then, in addition to their beauty, they can filter waste, absorb excess water, and purify air. Through the proposed architectural center, a living system can be explored through several methods that influence natural resources at the micro-scale in order to impact sustainability at the macro-scale. The center consists of an ecological program of plant and water biomimicry study which becomes a living organism that purifies the river water in a natural way through architecture. Consequently, a rich and beautiful natural setting could serve as a destination for education, observation and adventure, as well as providing unpolluted fresh water to the major cities of Texas. These facts raise several questions: Why is conservation so rarely practiced by those who must extract a living from the land? Are we sufficiently enlightened to realize that we must now challenge that dogma? Do architects respond to the environment and reflect on it in the correct way through their public projects? The method adopted in this paper consists of general research into and careful study of the living machine system, of how to integrate it at the architectural level, and finally of the consolidation of all the conclusions into a design proposal. To summarise, this paper attempts to provide a sustainable alternative perspective for bridging physical and mental interaction with biodiversity to enhance nature through architecture.

Keywords: Biodiversity, design with nature, sustainable architecture, waste water treatment.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1984
3 Navigation and Guidance System Architectures for Small Unmanned Aircraft Applications

Authors: Roberto Sabatini, Celia Bartel, Anish Kaharkar, Tesheen Shaid, Subramanian Ramasamy

Abstract:

Two multisensor system architectures for navigation and guidance of small Unmanned Aircraft (UA) are presented and compared. The main objective of our research is to design a compact, light and relatively inexpensive system capable of providing the required navigation performance in all phases of flight of a small UA, with a special focus on precision approach and landing, where Vision-Based Navigation (VBN) techniques can be fully exploited in a multisensor integrated architecture. Various existing techniques for VBN are compared and the Appearance-Based Navigation (ABN) approach is selected for implementation. Feature extraction and optical flow techniques are employed to estimate flight parameters such as roll angle, pitch angle, deviation from the runway centreline and body rates. Additionally, we address the possible synergies of VBN, Global Navigation Satellite System (GNSS) and MEMS-IMU (Micro-Electromechanical System Inertial Measurement Unit) sensors, and the use of an Aircraft Dynamics Model (ADM) to provide additional information to compensate for the shortcomings of VBN and MEMS-IMU sensors in high-dynamics attitude determination tasks. An Extended Kalman Filter (EKF) is developed to fuse the information provided by the different sensors and to provide estimates of the position, velocity and attitude of the UA platform in real time. The key mathematical models describing the two architectures, i.e., the VBN-IMU-GNSS (VIG) system and the VIG-ADM (VIGA) system, are introduced. The first architecture uses VBN and GNSS to augment the MEMS-IMU; the second also includes the ADM to augment the attitude channel. Simulation of these two architectures is carried out, and the performances of the two schemes are compared on a small UA integration scheme (i.e., the AEROSONDE UA platform) exploring a representative cross-section of this UA's operational flight envelope, including high-dynamics manoeuvres and CAT-I to CAT-III precision approach tasks. Simulation of the first system architecture (i.e., the VIG system) shows that the integrated system can reach position, velocity and attitude accuracies compatible with the Required Navigation Performance (RNP) requirements. Simulation of the VIGA system also shows promising results, since the achieved attitude accuracy is higher using VBN-IMU-ADM than using VBN-IMU only. A comparison of the VIG and VIGA systems is also performed, and it shows that the position and attitude accuracies of both proposed systems are compatible with the RNP specified for the various UA flight phases, including precision approach down to CAT-II.
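
To make the fusion step concrete, the following minimal sketch (not the authors' VIG/VIGA implementation; the state, models and noise values are illustrative placeholders) shows the generic EKF predict/update cycle on which such a VBN/GNSS/IMU integration relies, here for a toy position-velocity state aided by position fixes.

```python
# Generic EKF predict/update sketch (illustrative only, not the paper's filter).
# Toy constant-velocity state x = [position, velocity]; a position fix plays the
# role of an aiding measurement (e.g., GNSS or vision-based). Noise values are placeholders.
import numpy as np

dt = 0.1                                   # time step, s
F = np.array([[1.0, dt], [0.0, 1.0]])      # state-transition Jacobian (linear in this toy case)
H = np.array([[1.0, 0.0]])                 # measurement Jacobian: position is observed directly
Q = np.diag([1e-4, 1e-3])                  # process noise covariance (placeholder)
R = np.array([[0.25]])                     # measurement noise covariance (placeholder)

x = np.array([0.0, 1.0])                   # initial state estimate
P = np.eye(2)                              # initial estimate covariance

def ekf_step(x, P, z):
    # Predict: propagate the state with the dynamics model and inflate the covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement via the Kalman gain.
    y = z - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Feed a few simulated position fixes through the filter.
for k, z in enumerate([np.array([0.12]), np.array([0.21]), np.array([0.33])]):
    x, P = ekf_step(x, P, z)
    print(f"step {k}: position={x[0]:.3f}, velocity={x[1]:.3f}")
```

In the architectures described above, the same predict/update structure is applied to a larger navigation state, with VBN, GNSS and (in the VIGA case) ADM outputs entering as additional measurement channels.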

Keywords: Global Navigation Satellite System (GNSS), Low-cost Navigation Sensors, MEMS Inertial Measurement Unit (IMU), Unmanned Aerial Vehicle, Vision-Based Navigation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3215