Search results for: Base Input Reconstruction
212 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation, and it involves selecting among different MIMO transmission schemes or modes so as to adapt to the varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes - transmit diversity (TD) and spatial multiplexing (SM) - using the fuzzy logic technique. In this method, two channel quality indicators (CQI), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision - an inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic-based switching technique outperforms the conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: Channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity.
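The rule-based inference sketched in the abstract above can be illustrated with a minimal Python example. The membership breakpoints, rule base and decision rule below are hypothetical placeholders, not the values or rules used in the paper.

```python
# Minimal fuzzy-logic scheme-switching sketch (illustrative only).
# Membership breakpoints and the rule base are hypothetical assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def switch_scheme(rsnr_db, rssi_dbm):
    # Fuzzify the two channel quality indicators.
    rsnr_low  = tri(rsnr_db, -5, 5, 15)
    rsnr_high = tri(rsnr_db, 10, 20, 30)
    rssi_low  = tri(rssi_dbm, -100, -85, -70)
    rssi_high = tri(rssi_dbm, -75, -60, -45)

    # Rule base: a good channel favours spatial multiplexing (SM),
    # a poor channel favours transmit diversity (TD).
    sm_strength = min(rsnr_high, rssi_high)
    td_strength = max(rsnr_low, rssi_low)

    # Crisp decision fed back to the transmitter.
    return "SM" if sm_strength > td_strength else "TD"

print(switch_scheme(22.0, -55.0))   # strong channel -> SM
print(switch_scheme(3.0, -95.0))    # weak channel   -> TD
```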
211 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based On Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods such as the CCV (Color Coherence Vector) are based on strong segmentation. Indexation based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is consequently reduced to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is taken into account to rank images, which means that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexation method in order to compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which sub-region among the N*N sub-regions we should base the indexing of images. Based on the results achieved here, it seems that relying on local histograms, which imposes an extra overhead on the system by involving another preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image rather than relying on the local histogram having the lowest distance to the query histograms.
Keywords: CBIR, Color Global Histogram, Color Local Histogram, Weak Segmentation, Euclidean Distance.
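A short Python sketch of the local-histogram indexing and lowest-distance matching rule described above is given below; the block grid size, bin count and toy images are arbitrary choices for illustration, not the authors' settings.

```python
# Sketch of local-histogram indexing with the "lowest distance" matching rule.
import numpy as np

def local_histograms(image, grid=2, bins=8):
    """Split an RGB image into grid x grid blocks and return one
    normalised colour histogram per block."""
    h, w, _ = image.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = image[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins, bins, bins),
                                     range=((0, 256),) * 3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)            # shape: (grid*grid, bins**3)

def min_block_distance(query_img, db_img):
    """Euclidean distance between every pair of sub-region histograms;
    the lowest of the N*N values is used to rank the database image."""
    q, d = local_histograms(query_img), local_histograms(db_img)
    dists = np.linalg.norm(q[:, None, :] - d[None, :, :], axis=2)
    return dists.min()

# Toy usage with random "images".
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (64, 64, 3))
img_b = rng.integers(0, 256, (64, 64, 3))
print(min_block_distance(img_a, img_b))
```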
210 Additive Manufacturing with Ceramic Filler Concerning Filament Creation and Strength
Authors: Wolfram Irsa, Lorenz Boruch
Abstract:
Innovative solutions in additive manufacturing applying material extrusion for functional parts necessitate innovative filaments with consistent quality. Uniform homogeneity and consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter produced by injection molding, kneader machines, or mixing equipment. These technologies rely on dedicated equipment that is rarely available in production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3-D printing. Consequently, scientific studies in labs are often constrained to the compositions and concentrations of fillers offered on the market. Therefore, we present a prototypal laboratory methodology, scalable to tailored primal matter, for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. A custom-made filament encapsulates the ceramic fillers and serves, together with polylactide (PLA), a thermoplastic polyester, as primal matter; it is processed in the melting zone of the extruder, preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity. The filament is 3-D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms steady dispersion of the ceramic particles in the composite filament. This permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrices. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter. It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment. Nevertheless, it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology seems applicable to other polymer matrices and suitable for further utilitarian particle types beyond ceramic fillers. This inaugurates a roadmap for supplementary laboratory development of special composite filaments, providing value for industries and societies. This low-threshold entry to sophisticated preparation of composite filaments - enabling businesses to create their own dedicated filaments - will support the mutual efforts to establish 3D printing for new functional devices.
Keywords: Additive manufacturing, ceramic composites, complex filament, industrial application.
209 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark
Authors: B. Elshafei, X. Mao
Abstract:
The demand for renewable energy is increasing significantly, and major investments are being made in the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on and driven by the prediction of wind speed, which by the nature of wind is highly stochastic and widely random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study aims to investigate the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in the accuracy of predictions, showing that using vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, and reflecting the importance of including all sub-data and pre-processed signals in wind speed forecasting models.
Keywords: Data fusion, Gaussian process regression, signal denoise, temporal extrapolation.
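The idea of using lagged wind speed together with its vector components as extra predictors can be sketched as follows. This is a simplified, single-fidelity Gaussian process regression on synthetic data; the paper's EWT denoising and deep multi-fidelity fusion are not reproduced here.

```python
# Simplified GPR sketch: predict the next wind speed from lagged speeds
# plus the u/v vector components as additional input layers (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.arange(1000)
speed = 8 + 2 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.5, t.size)
direction = rng.uniform(0, 2 * np.pi, t.size)
u, v = speed * np.cos(direction), speed * np.sin(direction)

lag = 6
rows, target = [], []
for k in range(lag, t.size):
    # Input vector: the previous `lag` speeds plus the latest u/v components.
    rows.append(np.concatenate([speed[k - lag:k], [u[k - 1], v[k - 1]]]))
    target.append(speed[k])
X, y = np.array(rows), np.array(target)

split = 800
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:split], y[:split])
pred = gpr.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print("RMSE (m/s):", round(float(rmse), 3))
```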
208 Evaluating Emission Reduction Due to a Proposed Light Rail Service: A Micro-Level Analysis
Authors: Saeid Eshghi, Neeraj Saxena, Abdulmajeed Alsultan
Abstract:
Carbon dioxide (CO2), alongside other gas emissions in the atmosphere, causes a greenhouse effect, resulting in an increase in the average temperature of the planet. Transportation vehicles are among the main contributors to CO2 emissions. Stationary vehicles with running engines produce more emissions than moving ones. Intersections with traffic lights that force vehicles to remain stationary for a period of time therefore produce more CO2 pollution than other parts of the road. This paper focuses on analyzing the CO2 produced by the traffic flow at the Anzac Parade Road - Barker Street intersection in Sydney, Australia, before and after the implementation of light rail transit (LRT). The data were gathered during the construction phase of the LRT by counting the number of vehicles on each path of the intersection for 15 minutes during the evening rush hour over one week (6-7 pm, July 04-31, 2018) and then multiplying by 4 to estimate the hourly flow of vehicles. For analyzing the data, the microscopic simulation software VISSIM was used. Through the analysis, the traffic flow was processed in three stages: before the implementation of the light rail, during the construction phase, and after the implementation. Finally, the traffic results were input into another software package, EnViVer, to calculate the amount of CO2 produced during one hour. The results showed that after the implementation of the light rail, CO2 will drop by a minimum of 13%. This finding provides evidence that light rail is a sustainable mode of transport.
Keywords: Carbon dioxide, emission modeling, light rail, microscopic model, traffic flow.
207 Self-Help Adaptation to Flooding in Low-Income Settlements in Chiang Mai, Thailand
Authors: Nachawit Tikul
Abstract:
This study aimed to determine low-income housing adaptations to flooding, which causes living problems and housing damage, and the results of those improvements. Three low-income settlements in Chiang Mai which experienced different flood types, i.e., flash floods in Samukeepattana, drainage floods in Bansanku, and river floods in Kampangam, were chosen for the study. Almost all of the residents improved their houses to protect the property from flood damage by changing building materials to flood-damage-resistant materials for walls, floors, and other parts of the structure that were below the base annual flood elevation. They could only build some parts of their own homes, so hiring skilled workers or contractors was still important. Building materials which need no special tools, are easy to access and use for construction, and are low in cost are selected for construction. The residents in the three slums faced living problems for only a short time and were able to cope with them. This may be due to the location of the three slums near the city, where assistance is readily available. However, the housing and way of life in the slums can endure only the regular floods, and residents still have problems in unusual floods, which have been experienced once or twice during the past 10 years. The residents accept the need for evacuations and prepare for them. When faced with extreme floods, residents have evacuated to the nearest safe place such as schools and public buildings, and come back to repair the houses after the flood. These are the distinguishing characteristics of low-income living which can withstand serious situations due to the simple lifestyle. Therefore, preparation of living areas for use during severe floods and encouraging production of affordable flood-resistant materials should be areas of concern when formulating disaster assistance policies for low-income people.
Keywords: Flooding, low-income settlement, housing, adaptation.
206 Study and Analysis of Permeable Articulated Concrete Blocks Pavement: With Reference to Indian Context
Authors: Shrikant Charhate, Gayatri Deshpande
Abstract:
Permeable pavements have significant benefits over conventional pavements in terms of sustainability and environmental impact, such as managing runoff and infiltration while carrying traffic. Some countries are using this technique, especially at locations where durability and other parameters are important; however, sparse work has been done on this concept. In India, it is yet to be adopted. In this work, the progress in the characterization and development of Permeable Articulated Concrete Block (PACB) pavement design is described and discussed with reference to Indian conditions. The experimentation and in-depth analysis were carried out considering conditions like soil erosion, water logging, and dust, which are significant challenges caused by the impermeability of pavement. Concrete blocks of size 16.5'' x 6.5'' x 7'', consisting of an arch shape (4'') at the bottom and ½'' PVC holes for articulation, were cast. These blocks were tested for flexural strength. The articulation was done with nylon ropes, forming a series-connected concrete block system. The total spacing between the blocks was kept at about 8 to 10% of the total area. The hydraulic testing was carried out by placing the articulated blocks over a combination of layers of soil, geotextile, and clean angular aggregate, in order to determine the percentage of seepage through the entire system. The experimental results showed that, with this block shape, the flexural strength achieved exceeded the permissible limit. Such blocks, with this combination of layers, could be a very useful innovation in Indian conditions and useful at various locations compared to traditional blocks, as an alternative for long-term sustainability.
Keywords: Connections, geotextile, permeable ACB, pavements, stone base.
205 Design and Analysis of a Piezoelectric Linear Motor Based on Rigid Clamping
Authors: Chao Yi, Cunyue Lu, Lingwei Quan
Abstract:
Piezoelectric linear motors have the characteristics of excellent electromagnetic compatibility, high positioning accuracy, compact structure and no deceleration mechanism, which make them promising for application in micro-miniature precision drive systems. However, most piezoelectric motors employ flexible clamping, which has insufficient rigidity and is difficult to use in rapid positioning. Another problem is that this clamping method seriously affects the vibration efficiency of the vibrating unit. In order to solve these problems, this paper proposes a piezoelectric stack linear motor based on double-end rigid clamping. First, a piezoelectric linear motor with a length of only 35.5 mm is designed. This motor is mainly composed of a motor stator, a driving foot, a ceramic friction strip, a linear guide, a pre-tightening mechanism and a base. This structure is much simpler and smaller than most similar motors, and it is easy to assemble as well as to control precisely. In addition, the properties of the piezoelectric stack are reviewed and, in order to obtain the elliptic motion trajectory of the driving head, a driving scheme based on a longitudinal-shear composite stack is innovatively proposed. Finally, impedance analysis and speed performance testing were performed on the piezoelectric linear motor prototype. The motor can reach a speed of up to 25.5 mm/s under the excitation of a signal voltage of 120 V and a frequency of 390 Hz. The results show that the proposed piezoelectric stack linear motor achieves excellent performance. It can run smoothly over a large speed range, which makes it suitable for various precision control applications in medical imaging, aerospace, precision machinery and many other fields.
Keywords: Elliptical trajectory, linear motor, piezoelectric stack, rigid clamping.
204 Detection of Transgenes in Cotton (Gossypium hirsutum L.) by Using Biotechnology/Molecular Biological Techniques
Authors: Ahmad Ali Shahid, Muhammad Shakil Shaukat, Kamran Shehzad Bajwa, Abdul Qayyum Rao, Tayyab Husnain
Abstract:
Agriculture is the backbone of the economy of Pakistan, and cotton is the major agricultural export and the supreme source of raw fiber for our textile industry. To combat severe insect and weed problems, a combination of three genes, namely Cry1Ac, Cry2A and EPSPS, was transferred into the locally cultivated cotton variety MNH-786 using Agrobacterium-mediated genetic transformation. The present study focused on the molecular screening of transgenic cotton plants at the T3 generation in order to confirm the integration and expression of all three genes (Cry1Ac, Cry2A and EPSP synthase) in the cotton genome. Initially, a glyphosate spray assay was used for screening of transgenic cotton plants containing the EPSP synthase gene at the T3 generation. Transgenic cotton plants which were healthy and showed no damage on leaves were selected after 7 days of spray. For molecular analysis of the transgenic cotton plants in the laboratory, their genomic DNA was isolated and subjected to amplification of the three genes. Seventeen out of twenty plants (Cry1Ac gene), ten out of twenty (Cry2A gene) and all twenty (EPSP synthase gene) produced positive amplification. On the basis of PCR amplification, ten transgenic plant samples were subjected to protein expression analysis through ELISA. The results showed that eight out of ten plants were actively expressing the three transgenes. Real-time PCR was also done to quantify the mRNA expression levels of the Cry1Ac and EPSP synthase genes. Finally, eight plants were confirmed for the presence and active expression of all three genes at the T3 generation.
Keywords: Agriculture, Cotton, Transformation, Cry Genes, ELISA and PCR.
203 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals
Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty
Abstract:
A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemical software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest with recursive feature elimination was used to identify promising features; this performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.
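A toy sketch of the sign/magnitude split described above is given below using scikit-learn: a random forest with recursive feature elimination learns the sign of each coefficient, and a two-hidden-layer network learns its magnitude. The data are synthetic stand-ins, not real electron-repulsion integrals or moments.

```python
# Toy two-stage coefficient estimator: random forest for sign, network for magnitude.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))                      # stand-in orbital features
coef = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=2000)

X_tr, X_te, c_tr, c_te = train_test_split(X, coef, random_state=0)

# Stage 1: recursive feature elimination + random forest learn the sign.
sign_model = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                 n_features_to_select=10).fit(X_tr, c_tr > 0)
sign_pred = np.where(sign_model.predict(X_te), 1.0, -1.0)

# Stage 2: two-hidden-layer network learns the magnitude.
mag_model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                         random_state=0).fit(X_tr, np.abs(c_tr))
mag_pred = mag_model.predict(X_te)

# Combine sign and magnitude into the coefficient estimate.
coef_pred = sign_pred * mag_pred
print("mean abs error:", np.abs(coef_pred - c_te).mean())
```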
202 A Study on the Performance Characteristics of Variable Valve for Reverse Continuous Damper
Authors: Se Kyung Oh, Young Hwan Yoon, Ary Bachtiar Krishna
Abstract:
Nowadays, a passenger car suspension must meet high performance criteria with light weight, low cost, and low energy consumption. A pilot-controlled proportional valve is designed and analyzed to obtain a small pressure change rate after blow-off, and to obtain a fast damper response, a reverse damping mechanism is adopted. The reverse continuous variable damper is designed as an HS-SH damper, which offers good body control with a reduced input force transferred from the tire, compared with any other type of suspension system. The damper structure is designed so that rebound and compression damping forces can be tuned independently, with the variable valve placed externally. The rate of pressure change with respect to the flow rate after blow-off becomes smooth when the fixed orifice size increases, which means that the blow-off slope is controllable through the fixed orifice size. Damping forces are measured with the change of the solenoid current at different piston velocities to confirm the maximum hysteresis of 20 N, linearity, and variance of damping force. The damping force variance is wide and continuous, and is controlled by the spool opening, a scheme usually adopted in proportional valves. The reverse continuous variable damper developed in this study is expected to be utilized in semi-active suspension systems in passenger cars after its performance and the simplicity of its design are confirmed through a real car test.
Keywords: Blow-off, damping force, pilot-controlled proportional valve, reverse continuous damper.
201 Use of Chlorophyll Meters to Assess In-Season Wheat Nitrogen Fertilizer Requirements in the Southern San Joaquin Valley
Authors: Brian H. Marsh
Abstract:
Nitrogen fertilizer is the most used and often the most mismanaged nutrient input. Nitrogen management has tremendous implications for crop productivity, quality and environmental stewardship. Sufficient nitrogen is needed for optimum yield and quality. Soil and in-season plant tissue testing for nitrogen status is a time-consuming and expensive process. Real-time sensing of plant nitrogen status can be a useful tool in managing nitrogen inputs. The objectives of this project were to assess the reliability of remotely sensed non-destructive plant nitrogen measurements compared to wet chemistry data from sampled plant tissue, to develop in-season nitrogen recommendations based on remotely sensed data for improved nitrogen use efficiency, and to assess the potential for determining yield and quality from remotely sensed data. Very good correlations were observed between early-season remotely sensed crop nitrogen status, plant nitrogen concentrations and subsequent in-season fertilizer recommendations. The transmittance/absorbance type meters gave the most accurate readings. The early in-season fertilizer recommendation would be to apply 40 kg nitrogen per hectare plus 15 kg nitrogen per hectare for each unit difference measured with the SPAD meter between the crop and a reference area, or 25 kg plus 13 kg per hectare for each unit difference measured with the CCM 200. Once the crop was sufficiently fertilized, meter readings became inconclusive and were of no benefit for determining nitrogen status, silage yield and quality, and grain yield and protein.
Keywords: Wheat, nitrogen fertilization, chlorophyll meter.
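The in-season recommendation rule quoted above is a simple linear formula, sketched below in Python; the example readings are hypothetical.

```python
# Sketch of the in-season recommendation rule quoted in the abstract:
# 40 kg N/ha plus 15 kg N/ha per unit SPAD difference between a well-fertilised
# reference strip and the crop (25 + 13 per unit for the CCM 200).
def n_recommendation(reference_reading, crop_reading, meter="SPAD"):
    base, per_unit = (40, 15) if meter == "SPAD" else (25, 13)
    diff = max(0.0, reference_reading - crop_reading)
    return base + per_unit * diff          # kg N per hectare

print(n_recommendation(45.0, 42.5))             # SPAD: 40 + 15*2.5 = 77.5 kg/ha
print(n_recommendation(22.0, 20.0, "CCM200"))   # CCM 200: 25 + 13*2 = 51 kg/ha
```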
200 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which finds highly accurate classifiers by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach is based on three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods; the standard homogeneous ensemble methods include error-correcting output codes and Dagging, and the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also, heterogeneous models exhibit better results than homogeneous models on standard intrusion detection datasets.
Keywords: Data mining, ensemble, radial basis function, support vector machine, accuracy.
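A minimal scikit-learn sketch of a bagged homogeneous ensemble and a heterogeneous ensemble on synthetic data is given below. scikit-learn has no radial basis function network, so an RBF-kernel SVM stands in for that base learner, and soft voting stands in for the paper's arcing combiner.

```python
# Minimal homogeneous (bagging) and heterogeneous (voting) ensemble sketch.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)

# Homogeneous ensemble: bagging over an RBF-kernel SVM.
bagged_svm = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=0)

# Heterogeneous ensemble: combine two different base learners by soft voting.
hybrid = VotingClassifier([("svm", SVC(kernel="rbf", probability=True)),
                           ("mlp", MLPClassifier(max_iter=1000, random_state=0))],
                          voting="soft")

for name, model in [("bagged SVM", bagged_svm), ("hybrid", hybrid)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: accuracy = {acc:.3f}")
```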
199 A Hybridized Competency-Based Teacher Candidate Selection System
Authors: R. Ramli, M. I. Ghazali, H. Ibrahim, M. M. Kasim, F. M. Kamal, S.Vikneswari
Abstract:
Teachers form the backbone of any educational system; hence, selecting qualified candidates is very crucial. In Malaysia, decision making in the selection process involves a few stages: initial filtering through academic achievement, an entry examination, and an interview session. The last stage is the most challenging since it highly depends on human judgment. Therefore, this study sought to identify the selection criteria for teacher candidates that form the basis of an efficient multi-criteria teacher-candidate selection model for that last stage. The relevant criteria were determined from the literature and also based on input from experts, namely those involved in interviewing teacher candidates at a public university offering the formal training program. Three main competency criteria were identified: content knowledge, communication skills and personality. Further, each main criterion was divided into several sub-criteria. The Analytic Hierarchy Process (AHP) technique was employed to allocate weights for the criteria and was later integrated with a Simple Weighted Average (SWA) scoring approach to develop the selection model. Subsequently, a web-based Decision Support System was developed to assist in the process of selecting qualified teacher candidates. The Teacher-Candidate Selection (TeCaS) system is able to assist the panel of interviewers during the selection process, which involves a large amount of complex qualitative judgments.
Keywords: Analytic Hierarchy Process, Simple Weighted Average, Decision Support System, Multi-criteria decision making problem.
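The AHP weighting followed by simple weighted average scoring can be sketched as follows; the pairwise comparison matrix and candidate scores are made-up illustrations, not data from the study.

```python
# Sketch: AHP weights from a pairwise comparison matrix, then SWA scoring.
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights of an AHP pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Criteria: content knowledge, communication skills, personality (hypothetical judgments).
pairwise = np.array([[1.0, 2.0, 3.0],
                     [0.5, 1.0, 2.0],
                     [1/3, 0.5, 1.0]])
weights = ahp_weights(pairwise)

# Interview scores (0-10) for two hypothetical candidates.
candidates = {"A": np.array([8, 6, 7]), "B": np.array([6, 9, 8])}
for name, scores in candidates.items():
    swa = float(weights @ scores)          # simple weighted average score
    print(name, round(swa, 2))
```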
198 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning
Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park
Abstract:
Structural health monitoring systems based on seismic responses have been used to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentration, which is related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands a significantly high computational cost. To prevent the failure of structural members arising from the characteristics of the earthquake, and to avoid the significantly high computational cost of seismic response analysis, this paper presents an artificial neural network (ANN) based prediction model for the dynamic responses of a building over a specific time length. From the measured dynamic responses, the input and output nodes of the ANN are formed by windows of this specific time length and adopted for training. In the model, an evolutionary radial basis function neural network (ERBFNN), in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the RBF variables, is implemented. The effectiveness of the proposed model is verified through an analytical study applying responses from a dynamic analysis of a multi-degree-of-freedom system as training data for the ERBFNN.
Keywords: Structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm.
197 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth become increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. The worldwide observed changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for using statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as an input source to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: Climate Change, Downscaling, GCM, RCM.
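The statistical downscaling idea described above (fit an empirical transfer function between GCM predictors and a station-scale predictand, then apply it to scenario output) can be sketched as follows on synthetic data; the predictor choice and linear model are illustrative assumptions.

```python
# Sketch of statistical downscaling: empirical relation between coarse GCM
# predictors and a station-scale predictand, applied to scenario output.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical GCM grid-cell predictors for the historical period
# (e.g. pressure anomaly, specific humidity, geopotential anomaly).
predictors_hist = rng.normal(size=(365, 3))
station_precip = np.maximum(
    0, 2.0 + predictors_hist @ np.array([1.5, 3.0, -1.0]) + rng.normal(0, 1, 365))

# Fit the transfer function on the historical period.
model = LinearRegression().fit(predictors_hist, station_precip)

# Apply it to (synthetic) future-scenario GCM output.
predictors_future = rng.normal(loc=0.2, size=(365, 3))
downscaled_precip = np.maximum(0, model.predict(predictors_future))
print("mean downscaled daily precipitation (mm):", downscaled_precip.mean().round(2))
```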
196 Study of Integrated Vehicle Image System Including LDW, FCW, and AFS
Authors: Yi-Feng Su, Chia-Tseng Chen, Hsueh-Lung Liao
Abstract:
The objective of this research is to develop an advanced driver assistance system characterized by the functions of lane departure warning (LDW), forward collision warning (FCW) and an adaptive front-lighting system (AFS). The system mainly uses a CCD/CMOS camera to acquire images of the roadway ahead, in association with the analysis made by an image-processing unit concerning the lane ahead and the preceding vehicles. The input image captured by the camera is used to recognize the lane and the preceding vehicle positions by image detection and DROI (Dynamic Range of Interesting) algorithms. Therefore, the system is able to issue real-time auditory and visual warnings when a driver is unwittingly departing the lane or driving too close to the preceding vehicle, so that the danger can be prevented. During the nighttime, in addition to the foregoing warning functions, the system is able to control the bending light of the headlamp to provide immediate illumination when making a turn on a curved lane and to adjust the beam level automatically to reduce the lighting interference with oncoming vehicles driving in the opposite direction, based on the lane curvature and vanishing point estimations. The experimental results show that the integrated vehicle image system is robust in most environments; the lane detection and preceding vehicle detection average accuracies are both above 90%.
Keywords: Lane mark detection, lane departure warning (LDW), dynamic range of interesting (DROI), forward collision warning (FCW), adaptive front-lighting system (AFS).
195 Digital Automatic Gain Control Integrated on WLAN Platform
Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel
Abstract:
In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compliant with the IEEE 802.11a/g standards. These standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we have used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control over those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC in this process is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is performed in a few steps: measuring the power levels of the baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor from the accumulation, and applying the adjusted gain factor to the baseband values. Based on the measured dependence of the RSSI signal on input power, we have concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but also effective, and reduces the complexity and power consumption of the DAGC. This DAGC has been implemented and tested both in an FPGA and in an ASIC as a part of our WLAN baseband processor. Finally, we have integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
Keywords: WLAN, AGC, RSSI, baseband processor.
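The DAGC steps listed above (measure the block power, accumulate the error against a set point, adjust the gain factor, apply it to the baseband samples) can be sketched as a simple software loop; the set point, loop gain and block size below are illustrative, not the values used in the ASIC.

```python
# Integral-control DAGC loop sketch over blocks of baseband samples.
import numpy as np

def dagc(samples, set_point_db=-10.0, loop_gain=0.05, block=64):
    gain_db, acc, out = 0.0, 0.0, []
    for start in range(0, len(samples), block):
        # Apply the current digital gain to the incoming baseband values.
        blk = samples[start:start + block] * 10 ** (gain_db / 20)
        # Measure the power level of the gained block.
        power_db = 10 * np.log10(np.mean(np.abs(blk) ** 2) + 1e-12)
        # Accumulate the difference between set point and measured power.
        acc += set_point_db - power_db
        # Adjust the gain factor from the accumulation.
        gain_db = loop_gain * acc
        out.append(blk)
    return np.concatenate(out), gain_db

rng = np.random.default_rng(0)
rx = 0.01 * (rng.normal(size=4096) + 1j * rng.normal(size=4096))  # weak received signal
_, final_gain = dagc(rx)
print("final digital gain (dB):", round(final_gain, 1))
```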
194 Cooperative Cross Layer Topology for Concurrent Transmission Scheduling Scheme in Broadband Wireless Networks
Authors: Gunasekaran Raja, Ramkumar Jayaraman
Abstract:
In this paper, we consider a CCL-N (Cooperative Cross Layer Network) topology based on the cross layer (both centralized and distributed) environment to form network communities. Various performance metrics related to IEEE 802.16 networks are discussed in designing the CCL-N topology. In the CCL-N topology, nodes are classified as master nodes (Master Base Station [MBS]) and serving nodes (Relay Station [RS]). Node communities are organized based on the networking terminologies. Based on the CCL-N topology, various simulation analyses for both transparent and non-transparent relays are tabulated and the throughput efficiency is calculated. The weighted load balancing problem plays a challenging role in IEEE 802.16 networks. The CoTS (Concurrent Transmission Scheduling) scheme is formulated in terms of three aspects of the transmission mechanism - based on identical communities, different communities and identical node communities. The CoTS scheme helps in identifying the weighted load balancing problem. Based on the analytical results, the modularity value is inversely proportional to the error value. The modularity value plays a key role in solving the CoTS problem based on hop count. The transmission mechanism for an identical node community has no impact since the modularity value is the same for all the network groups. In this paper, these three aspects of communities based on the modularity value, which help in solving the problem of weighted load balancing and CoTS, are discussed.
Keywords: Cross layer network topology, concurrent scheduling, modularity value, network communities and weighted load balancing.
193 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, the crashworthiness of an automotive body structure can be improved from the very beginning of the design stage, thanks to the development of specific optimization tools. It is well known how finite element codes can help the designer to investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods which are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs, identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method seems to be excellent in accuracy, robustness and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car in composite material subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subjected to some design constraints are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: Composite material, crashworthiness, finite element analysis, optimization.
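A Kriging (Gaussian process) meta-model fitted to a small DOE and searched in place of further FE runs can be sketched as follows; the stand-in "simulation" function, design variables and bounds are invented for illustration and do not represent the LS-DYNA bumper model.

```python
# Sketch: Kriging meta-model over a small DOE, then a cheap search on the
# meta-model instead of additional crash simulations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def crash_simulation(x):
    """Stand-in for an expensive FE run returning specific energy absorption."""
    thickness, angle = x
    return 30 * np.exp(-((thickness - 2.5) ** 2) / 2) + 5 * np.cos(angle / 10)

rng = np.random.default_rng(0)
doe = np.column_stack([rng.uniform(1.0, 4.0, 20),      # wall thickness (mm)
                       rng.uniform(0.0, 45.0, 20)])    # taper angle (deg)
sea = np.array([crash_simulation(x) for x in doe])      # DOE responses

kriging = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
kriging.fit(doe, sea)

# Exhaustive search on the meta-model (no further "simulations" needed).
grid = np.column_stack([g.ravel() for g in
                        np.meshgrid(np.linspace(1, 4, 50), np.linspace(0, 45, 50))])
best = grid[np.argmax(kriging.predict(grid))]
print("predicted optimum (thickness mm, angle deg):", best)
```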
192 The Solar Wall in the Italian Climates
Authors: F. Stazi, C. Di Perna, C. Filiaci, A. Stazi
Abstract:
Passive systems were conceived with the purpose of maximizing the exploitation of solar energy in cold climates and at high altitudes. They spread all over the world until the 1980s without any attention to the specific climate or to summer behavior; this caused the deactivation of the systems due to a series of problems connected to summer overheating, complex management and the rising of dust. Until today, European regulation has limited only winter consumption without any attention to summer behavior, but the recent European standard EN 15251 underlines the relevance of indoor comfort and the necessity of validating analytical studies by monitoring case studies. In the present paper we demonstrate that the solar wall is an efficient system both from the thermal comfort and the energy saving points of view, and that it is the most suitable for our temperate climates because it can also be used as a passive cooling system. In particular, the paper presents an experimental and numerical analysis carried out on a case study with nine different solar passive systems in Ancona, Italy. We carried out a detailed study of the lodging served by the solar wall through the monitoring and evaluation of the indoor conditions. Analyzing the monitored data on the basis of recognized comfort models (ISO, ASHRAE, Givoni's BBCC), it emerged that the solar wall behaves optimally in the middle seasons. In the winter phase, this passive system gives more advantages in terms of energy consumption than the other systems, because it gives greater heat gain and therefore lower consumption. In summer, when the outside air temperature returns to the mean seasonal value, the indoor comfort is optimal thanks to efficient transverse ventilation activated by the same wall.
Keywords: Building envelope, energy saving, passive solar wall, thermal comfort.
191 Influence of Compactive Efforts on Cement-Bagasse Ash Treatment of Expansive Black Cotton Soil
Authors: Moses, G, Osinubi, K. J.
Abstract:
A laboratory study was undertaken on the influence of compactive effort on expansive black cotton soil specimens treated with up to 8% ordinary Portland cement (OPC) admixed with up to 8% bagasse ash (BA) by dry weight of soil and compacted using the energies of the standard Proctor (SP), West African Standard (WAS) or "intermediate", and modified Proctor (MP). The expansive black cotton soil was classified as A-7-6 (16) or CL using the American Association of State Highway and Transportation Officials (AASHTO) and Unified Soil Classification System (USCS), respectively. The 7-day unconfined compressive strength (UCS) values of the natural soil for the SP, WAS and MP compactive efforts were 286, 401 and 515 kN/m2, respectively, while the peak values of 1019, 1328 and 1420 kN/m2, recorded at 8% OPC/6% BA, 8% OPC/2% BA and 6% OPC/4% BA treatments, respectively, were less than the UCS value of 1710 kN/m2 conventionally used as the criterion for adequate cement stabilization. The soaked California bearing ratio (CBR) values of the OPC/BA stabilized soil increased with higher energy level from 2, 4 and 10% for the natural soil to peak values of 55, 18 and 8%, recorded at 8% OPC/4% BA, 8% OPC/2% BA and 8% OPC/4% BA treatments when the SP, WAS and MP compactive efforts were used, respectively. The durability of specimens was determined by immersion in water. Soil treated with the 8% OPC/4% BA blend gave a resistance to loss in strength of 50%, which is acceptable because of the harsh test condition of the 7-day soaking period to which specimens were subjected, instead of the 4-day soaking period for which a minimum resistance to loss in strength of 80% is specified. Finally, an optimal blend of 8% OPC/4% BA is recommended for the treatment of expansive black cotton soil for use as a sub-base material.
Keywords: Bagasse ash, California bearing ratio, Compaction, Durability, Ordinary Portland cement, Unconfined compressive strength.
190 Appraisal of Methods for Identifying, Mapping, and Modelling of Fluvial Erosion in a Mining Environment
Authors: F. F. Howard, I. Yakubu, C. B. Boye, J. S. Y. Kuma
Abstract:
Natural and human activities, such as mining operations, expose the natural soil to adverse environmental conditions, leading to contamination of soil, groundwater, and surface water, which has negative effects on humans, flora, and fauna. Bare or partly exposed soil is most liable to fluvial erosion. This paper enumerates various methods used to identify, map, and model fluvial erosion in a mining environment; classical, Artificial Intelligence (AI), and GIS methods have been reviewed. One of the many classical methods used to estimate fluvial erosion is the Revised Universal Soil Loss Equation (RUSLE) model. The RUSLE model is easy to use; its reliance on empirical relationships that may not always be applicable to specific circumstances or locations is a flaw. Other classical models for estimating fluvial erosion are the Soil and Water Assessment Tool (SWAT) and the Universal Soil Loss Equation (USLE). These models offer a more complete understanding of the underlying physical processes and encompass a wider range of situations. Although more difficult to utilise, they depend on the availability and dependability of input data for correctness. AI can help deal with multivariate and complex difficulties, can predict soil loss with higher accuracy than traditional methods, and can be used to build dedicated models for identifying degraded areas. AI techniques have become popular as an alternative predictor for degraded environments. However, this research proposes a hybrid of classical, AI, and GIS methods for efficient and effective modelling of fluvial erosion.
Keywords: Fluvial erosion, classical methods, Artificial Intelligence, Geographic Information System.
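As a worked illustration of the RUSLE estimate mentioned above, A = R × K × LS × C × P, the snippet below multiplies the five factors; the factor values are placeholders, not measurements from any mining site.

```python
# Worked RUSLE sketch: A = R * K * LS * C * P (placeholder factor values).
def rusle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (t ha^-1 yr^-1)."""
    return R * K * LS * C * P

A = rusle_soil_loss(R=450.0,   # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
                    K=0.03,    # soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
                    LS=1.8,    # slope length and steepness factor
                    C=0.45,    # cover-management factor (bare/partly exposed soil)
                    P=1.0)     # support practice factor (no conservation practice)
print(f"Estimated soil loss: {A:.1f} t/ha/yr")
```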
189 Control of Airborne Aromatic Hydrocarbons over TiO2-Carbon Nanotube Composites
Authors: Joon Y. Lee, Seung H. Shin, Ho H. Chun, Wan K. Jo
Abstract:
Polyvinyl acetate (PVA)-based titania (TiO2)-carbon nanotube composite nanofibers (PVA-TCCNs) with various PVA-to-solvent ratios and PVA-based TiO2 composite nanofibers (PVA-TN) were synthesized using an electrospinning process, followed by thermal treatment. The photocatalytic activities of these nanofibers in the degradation of airborne monocyclic aromatics under visible-light irradiation were examined. This study focuses on the application of these photocatalysts to the degradation of the target compounds at sub-part-per-million indoor air concentrations. The characteristics of the photocatalysts were examined using scanning electron microscopy, X-ray diffraction, ultraviolet-visible spectroscopy, and Fourier-transform infrared spectroscopy. For all the target compounds, the PVA-TCCNs showed photocatalytic degradation efficiencies superior to those of the reference PVA-TN. Specifically, the average photocatalytic degradation efficiencies for benzene, toluene, ethyl benzene, and o-xylene (BTEX) obtained using the PVA-TCCNs with a PVA-to-solvent ratio of 0.3 (PVA-TCCN-0.3) were 11%, 59%, 89%, and 92%, respectively, whereas those observed using the PVA-TNs were 5%, 9%, 28%, and 32%, respectively. PVA-TCCN-0.3 displayed the highest photocatalytic degradation efficiency for BTEX, suggesting the presence of an optimal PVA-to-solvent ratio for the synthesis of PVA-TCCNs. The average photocatalytic efficiencies for BTEX decreased from 11% to 4%, 59% to 18%, 89% to 37%, and 92% to 53%, respectively, when the flow rate was increased from 1.0 to 4.0 L min-1. In addition, the average photocatalytic efficiencies for BTEX decreased from 11% to ~0%, 59% to 3%, 89% to 7%, and 92% to 13%, respectively, when the input concentration was increased from 0.1 to 1.0 ppm. The prepared PVA-TCCNs were effective for the purification of airborne aromatics at indoor concentration levels, particularly when the operating conditions were optimized.
Keywords: Mixing ratio, nanofiber, polymer, reference photocatalyst.
188 Very High Speed Data Driven Dynamic NAND Gate at 22nm High K Metal Gate Strained Silicon Technology Node
Authors: Shobha Sharma, Amita Dev
Abstract:
Data driven dynamic logic is a high speed dynamic circuit style with low area. The clock of the dynamic circuit is removed, and the data drive the circuit instead of the clock for precharging purposes. This data driven dynamic NAND gate is given a static forward substrate bias of Vsupply/2, and alternatively the substrate bias is connected to the input data, resulting in a dynamic substrate bias. The dynamic substrate bias gives the shortest propagation delay, with a penalty on the power dissipation. The propagation delay is reduced by 77.8% compared to the normal reverse substrate biased data driven dynamic NAND. Also, the dynamically substrate-biased D3NAND's propagation delay is reduced by 31.26% compared to the data driven dynamic NAND gate with a static forward substrate bias of Vdd/2. This data driven dynamic NAND gate with dynamic body biasing gives the highest speed with no area penalty and finds its applications where a power penalty is acceptable. A combination of dynamic and static forward body bias can also be used, with reduced propagation delay compared to the static forward biased circuit and with a comparable increase in average power. The simulations were done on the HSPICE simulator with the 22nm high-k metal gate strained Si technology HP models of Arizona State University, USA.
Keywords: Data driven nand gate, dynamic substrate biasing, nand gate, static substrate biasing.
187 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band
Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant Kumar Srivastava
Abstract:
An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was done between the scattering coefficients and the soil moisture content to select the suitable incidence angle for the retrieval of soil moisture content. The 25° incidence angle was found more suitable. Support vector regression analysis was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of the soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and the estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE) and Nash-Sutcliffe Efficiency (NSE). The values of %Bias, RMSE and NSE were found to be 2.9451, 1.0986 and 0.9214, respectively, at HH-polarization. At VV-polarization, the values of %Bias, RMSE and NSE were found to be 3.6186, 0.9373 and 0.9428, respectively.
Keywords: Bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE.
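A small scikit-learn sketch of support vector regression relating the scattering coefficient to soil moisture, evaluated with the same %Bias, RMSE and NSE indices, is given below; the data are synthetic, not the X-band scatterometer measurements.

```python
# SVR sketch: scattering coefficient -> soil moisture, with %Bias, RMSE, NSE.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
sigma0 = rng.uniform(-25, -5, 80)                        # scattering coeff. (dB)
moisture = 0.9 * sigma0 + 30 + rng.normal(0, 1.0, 80)    # volumetric SM (%)

X_tr, X_te = sigma0[:60, None], sigma0[60:, None]
y_tr, y_te = moisture[:60], moisture[60:]

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
pred = svr.predict(X_te)

bias = 100 * np.sum(pred - y_te) / np.sum(y_te)            # %Bias
rmse = np.sqrt(np.mean((pred - y_te) ** 2))                # RMSE
nse = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"%Bias={bias:.3f}  RMSE={rmse:.3f}  NSE={nse:.3f}")
```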
186 The Emergence of Smart Growth in Developed and Developing Countries and Its Possible Application in Kabul City, Afghanistan
Authors: Bashir Ahmad Amiri, Nsenda Lukumwena
Abstract:
The global trend indicates that more and more people live, and will continue to live, in urban areas. Today cities are expanding both in physical size and in number due to rapid population growth along with sprawl development, which causes cities to expand beyond their growth boundaries and exerts intense pressure on environmental resources, especially farmlands, to accommodate new housing and urban facilities. Also noticeable is the increase in urban decay, which, together with the growing number of slum dwellers, presents another challenge that most cities in developed and developing countries have to deal with. Today urban practitioners, researchers, planners, and decision-makers seeking alternative development and growth management policies to house the rising urban population and to cure urban decay and slum issues turn to Smart Growth to achieve their goals. Many cities across the globe have adopted smart growth as an alternative growth management tool to deal with patterns and forms of development and to cure rising urban and environmental problems. The method used in this study is a literature analysis through reviewing various resources to highlight the potential benefits of Smart Growth in both developed and developing countries and to analyze to what extent it can be a strategic alternative for Afghanistan's cities, especially the capital city. Hence, a comparative analysis is carried out on three countries, namely the USA, China, and India, to identify the potential benefits of smart growth likely to serve as an achievable broad base for recommendations in different urban contexts.
Keywords: Growth management, housing, Kabul city, smart growth, urban-expansion.
185 Automated Fact-Checking By Incorporating Contextual Knowledge and Multi-Faceted Search
Authors: Wenbo Wang, Yi-fang Brook Wu
Abstract:
The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study presents a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive and authoritative data; 2) developing a search function to automatically select relevant, new and credible references; 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that: 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance; 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include: I) exploring the knowledge graph in Wikidata to dynamically augment the representations of claims and references without introducing too much noise; II) exploring semantic relations in claims and references to further enhance fact-checking.
Keywords: Fact checking, claim verification, Deep Learning, Natural Language Processing.
184 Verification of Space System Dynamics Using the MATLAB Identification Toolbox in Space Qualification Test
Authors: Y. V. Kim
Abstract:
This article presents an approach to the functional testing of a space system (SS), which could be a space vehicle (spacecraft, S/C) and/or its equipment and components - S/C subsystems. This test should finalize the Space Qualification Tests (SQT) campaign. It can be considered a generic test and used for a wide class of SS that, from the point of view of system dynamics and control theory, may be described by ordinary differential equations. The suggested methodology is based on a semi-natural-experiment laboratory stand that does not require complicated, precise and expensive technological control-verification equipment. However, it allows for testing the fully assembled system during Assembling, Integration and Testing (AIT) activities at the final phase of SQT, involving system hardware (HW) and software (SW). The test physically activates the system inputs (sensors) and outputs (actuators) and requires recording their outputs in real time. The data are then transferred to a laboratory computer, where they are post-processed by the MATLAB/Simulink Identification Toolbox. This allows for estimating the system dynamics in the form of estimates of its differential equation coefficients through the verification experiment and comparing them with the expected mathematical model, previously verified by mathematical simulation during the design process. The mathematical simulation results presented in the article show that this approach is applicable and helpful in SQT practice. Further semi-natural experiments should specify detailed requirements for the test laboratory equipment and test procedures.
Keywords: System dynamics, space system ground tests, space qualification, system dynamics identification, satellite attitude control, assembling integration and testing.
183 Application of Activity-Based Costing Management System by Key Success Paths to Promote the Competitive Advantages and Operation Performance
Authors: Mei-Fang Wu, Shu-Li Wang, Feng-Tsung Cheng
Abstract:
Highly developed technology and a highly competitive global market highlight the important role of competitive advantages and operation performance in sustainable company operation. Activity-Based Costing (ABC) provides accurate operation cost and operation performance information. A rich literature provides relevant research and case studies on Activity-Based Costing application, but research on the causal relationships between key success factors and specific outcomes, such as profitability or market share, is scarce. These relationships provide the ways to handle the key success factors to achieve the specific outcomes and thereby ensure the promotion of competitive advantages and operation performance. The main purpose of this research is to explore the key success paths using the Key Success Paths approach, which will lead the way in applying Activity-Based Costing. The Key Success Paths approach is an innovative method for exploring the causal relationships and explaining the effects of key success factors on specific outcomes of Activity-Based Costing implementation. The causal relationships between key success factors and successful specific outcomes are the Key Success Paths (KSPs). KSPs are the guidelines that lead cost management strategies toward the goals of competitive advantages and operation performance. The research findings indicate that good management system design may affect the outcomes of Activity-Based Costing application and, through KSP exploration, achieve outstanding competitive advantage, operating performance and profitability as well.
Keywords: Activity-Based Costing, key success factors, Key Success Paths approach, Key Success Paths, key failure paths.