Search results for: maximal ratio combiner
212 Tele-Diagnosis System for Rural Thailand
Authors: C. Snae Namahoot, M. Brueckner
Abstract:
Thailand's health system is challenged by the rising number of patients and the decreasing ratio of medical practitioners to patients, especially in rural areas. This may tempt inexperienced GPs to rush through the process of anamnesis, with the risk of incorrect diagnosis. Patients have to travel far to the hospital and wait for a long time to present their case. Many patients try to cure themselves with traditional Thai medicine. Many countries are making use of the Internet for medical information gathering, distribution and storage. Telemedicine applications are a relatively new field of study in Thailand; the ICT infrastructure had hampered widespread use of the Internet for medical information. With recent improvements, health and technology professionals can work out novel applications and systems to help advance telemedicine for the benefit of the people. Here we explore the use of telemedicine for people with health problems in rural areas in Thailand and present a Telemedicine Diagnosis System for Rural Thailand (TEDIST) for diagnosing certain conditions that people with Internet access can use to establish contact with Community Health Centers, e.g. by mobile phone. The system uses a Web-based input method for individual patients' symptoms, which are taken by an expert system for the analysis of conditions and appropriate diseases. The analysis harnesses a knowledge base and a backward chaining component to find out which health professionals should be presented with the case. Doctors have the opportunity to exchange emails or chat with the patients they are responsible for or with other specialists. Patients' data are then stored in a Personal Health Record.
Keywords: Biomedical engineering, data acquisition, expert system, information management system, information retrieval.
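The abstract does not give the rule syntax used by TEDIST; purely as an illustration of how a backward chaining component can work over a symptom/condition knowledge base, the following minimal sketch uses a small hypothetical rule set (the rules, symptom names and goal are invented, not taken from the paper).

```python
# Minimal backward-chaining sketch over a hypothetical symptom/condition rule base.
# Rules, facts and the goal are invented for illustration; they are not from TEDIST.
RULES = {
    "dengue_suspected": [{"high_fever", "joint_pain", "rash"}],
    "refer_to_specialist": [{"dengue_suspected"}, {"chest_pain"}],
}

def backward_chain(goal, facts, rules=RULES):
    """Return True if `goal` can be proven from `facts` via the rule base."""
    if goal in facts:                      # goal is an observed symptom
        return True
    for premises in rules.get(goal, []):   # try every rule concluding `goal`
        if all(backward_chain(p, facts) for p in premises):
            return True
    return False

reported = {"high_fever", "joint_pain", "rash"}
print(backward_chain("refer_to_specialist", reported))  # True
```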
211 Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques
Authors: Christopher Paterson, Richard Curry, Alan Purvis, Simon Johnson
Abstract:
Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthesis devices and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation and classification of action potentials (AP) from noisy data trains. Current techniques in the field of 'unsupervised, no-prior-knowledge' biosignal processing include energy operators, wavelet detection and adaptive thresholding. These tend to bias towards larger AP waveforms, APs may be missed due to deviations in spike shape and frequency, and correlated noise spectra can cause false detection. Also, such algorithms tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, based upon the use of a delayed copy of the AP to highlight discontinuities relative to background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an Auto Regressive Moving Average (ARMA) noise model has been constructed based upon the background noise of the recordings. Along with the AP classification means, this model enables generation of realistic neuronal data sets at arbitrary signal-to-noise ratio (SNR).
Keywords: Action potential detection, low SNR, phase-space diagrams/trajectories, unsupervised/no-prior knowledge.
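The delayed-copy phase-space idea is only outlined in the abstract; the sketch below is one plausible, simplified reading of it, not the authors' algorithm: the signal is embedded against a delayed copy of itself, and windows whose trajectory excursion exceeds a robust noise baseline are flagged as candidate spikes. The delay, window length and threshold factor are illustrative choices.

```python
import numpy as np

def phase_space_detect(x, delay=4, win=32, k=4.0):
    """Flag windows whose (x[n], x[n-delay]) trajectory strays far from the noise baseline.

    Simplified delayed-copy phase-space detector: the radial excursion
    r[n] = sqrt(x[n]^2 + x[n-delay]^2) is compared, window by window, against
    a robust (median/MAD) estimate of the background noise spread.
    """
    x = np.asarray(x, dtype=float)
    r = np.hypot(x[delay:], x[:-delay])              # radius in the 2-D embedding
    med = np.median(r)
    mad = np.median(np.abs(r - med)) / 0.6745        # robust spread estimate
    hits = []
    for start in range(0, len(r) - win, win):
        if r[start:start + win].max() > med + k * mad:
            hits.append(start + delay)               # index back in the original signal
    return hits

# Toy usage: Gaussian noise with two injected spikes.
rng = np.random.default_rng(0)
sig = rng.normal(0, 1, 2000)
sig[500:510] += 8.0
sig[1500:1510] -= 8.0
print(phase_space_detect(sig))
```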
210 The Effect of Zeolite on Sandy-Silt Soil Mechanical Properties
Authors: Shahryar Aftabi, Saeed Fathi, Mohammad H. Aminfar
Abstract:
It is well known that cemented sand is one of the best approaches to soil stabilization. In some cases, a blend of sand, cement and other pozzolanic materials such as zeolite, nano-particles and fiber is widely (commercially) available and can be effectively used in soil stabilization, especially in road construction. In this research, we investigate the effect of the CaO content of zeolite on the geotechnical characteristics of zeolite blended with sandy silt soil. Zeolites have a low amount of CaO in their structure, varying from 3% to 10%, and by removing the cement paste we want to investigate the effect of the zeolite pozzolan, without any activator, on the strength of the soil samples. The experiments are concentrated on various weight percentages of zeolite in the soil to examine the effect of the zeolite on drained shear strength and California Bearing Ratio (CBR), both with and without curing. The study also investigates the liquid limit and plastic limit behavior and compares the results obtained by Feng's and Wroth-Wood's methods in the fall cone (cone penetrometer) device; finally, the SEM images are presented. The results show that by increasing the percentage of zeolite in the without-curing samples, the fine zeolite particles increase the soil's strength to some extent, but the cured samples show a relatively higher strength than the without-curing ones. Since zeolites have no plastic behavior, the pozzolanic property of the zeolite plays a much greater role than its cementing properties. Indeed, it is better to combine zeolite particles with an activator material such as cement or lime to obtain better results.
Keywords: CBR, direct shear, fall-cone, sandy-silt, SEM, zeolite.
209 Effect of Strain and Storage Period on Some Qualitative and Quantitative Traits of Table Eggs
Authors: Hani N. Hermiz, Sukar H. Ali
Abstract:
This study includes the effect of strain and storage period, and their interaction, on some quantitative and qualitative traits and on the percentages of the egg components in eggs collected at the start of production (at age 24 weeks). Eggs were divided into three storage periods (1, 7 and 14 days) under refrigerator temperature (5-7 °C). Fifty-seven eggs were obtained randomly from each strain, namely Isa Brown and Lohman White. The General Linear Model within the SAS programme was used to analyze the collected data, and correlations between the studied traits were calculated for each strain. Average egg weight (EW), Haugh unit (HU), yolk index (YI), yolk % (YP), albumin % (AP) and yolk to albumin ratio (YAR) were 56.629 g, 87.968%, 0.493, 22.13%, 67.74% and 32.76, respectively. Eggs produced by ISA Brown surpassed those produced by Lohman White significantly (P<0.01) in EW (59.337 vs. 53.921 g) and AP (68.46 vs. 67.02%), while Lohman White surpassed ISA Brown significantly (P<0.01) in HU (91.998 against 83.939%), YI (0.498 against 0.487), YP (22.83 against 21.44%) and YAR (34.12 against 31.40). Storage period did not have any significant effect on EW and YI. Increasing the storage period caused a significant (P<0.01) decrease in HU. A non-significant increase in YP and a significant decrease in AP due to increasing storage period caused a significant increase in YAR. The interaction between strain and storage period affected EW, HU and YI significantly (P<0.01), while its effect on YP, AP and YAR was not significant. The highest and significant (P<0.01) correlation was recorded between YP and YAR (0.99) in both strains, while the lowest values were between AP and YAR, being -0.97 and -0.95 in ISA Brown and Lohman White, respectively. In conclusion, increasing the storage period caused only a slight decrease in egg weight, enabling the consumer to store eggs without any damage. Because albumin is used in many food industries, it is very important to focus on its weight. The correlations between some of the studied traits were significant, which means that selection for any trait will improve the other traits.
Keywords: Quality, quantity, storage period, strain, table egg
208 Energy Efficient Transmission of Image over DWT-OFDM System
Authors: Lakshmi Pujitha Dachuri, Nalini Uppala
Abstract:
In many applications, retransmissions of lost packets are not permitted. OFDM is a multi-carrier modulation scheme with excellent performance which allows overlapping in the frequency domain. With OFDM, multipath effects can be dealt with using relatively simple DSP algorithms.
In this paper, an image frame is compressed using the DWT, and the compressed data are arranged in data vectors, each with an equal number of coefficients. These vectors are quantized and binary coded to get the bit streams, which are then packetized and intelligently mapped to the OFDM system. Based on one-bit channel state information at the transmitter, the descriptions, in order of descending priority, are assigned to the currently good channels such that poorer sub-channels can only affect the less important data vectors. We consider only one-bit channel state information available at the transmitter, informing only whether a sub-channel is good or bad. For a good sub-channel, the instantaneous received power should be greater than a threshold Pth. Otherwise, the sub-channel is in a fading state and considered bad for that batch of coefficients. In order to reduce the system power consumption, the descriptions mapped onto the bad sub-channels are dropped at the transmitter. The binary channel state information gives an opportunity to map the bit streams intelligently and to save a reasonable amount of power. Using MATLAB simulation, we analyze the performance of our proposed scheme in terms of system energy saving without compromising the received quality in terms of peak signal-to-noise ratio.
Keywords: Binary channel state, Channel state feedback, DWT-OFDM system, Energy saving, Fading broadcast channel.
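As a rough illustration of the one-bit CSI mapping rule described above (not the authors' MATLAB code), the sketch below assigns descriptions in descending priority to sub-channels whose instantaneous power exceeds a threshold Pth and drops the rest; the power values, priorities and threshold are made up.

```python
import numpy as np

def map_descriptions(priorities, subchannel_power, p_th):
    """Assign descriptions (highest priority first) to 'good' sub-channels only.

    One-bit CSI: a sub-channel is good if its instantaneous power > p_th.
    Descriptions that would fall on bad sub-channels are dropped to save power.
    """
    good = np.flatnonzero(subchannel_power > p_th)   # indices of good sub-channels
    order = np.argsort(priorities)[::-1]             # highest priority first
    assignment = {int(desc): int(ch) for desc, ch in zip(order, good)}
    dropped = [int(d) for d in order[len(good):]]    # no good sub-channel left
    return assignment, dropped

power = np.array([1.2, 0.3, 0.9, 0.1, 2.0])   # illustrative sub-channel powers
prio = np.array([5, 3, 9, 1, 7])              # importance of each description
print(map_descriptions(prio, power, p_th=0.5))
```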
207 Modal Analysis of Machine Tool Column Using Finite Element Method
Authors: Migbar Assefa
Abstract:
The performance of a machine tool is ultimately assessed by its ability to produce a component of the required geometry in minimum time and at small operating cost. It is customary to base the structural design of any machine tool primarily upon the requirements of static rigidity and minimum natural frequency of vibration. The operating properties of machines, like cutting speed, feed and depth of cut, as well as the size of the work piece, also have to be kept in mind by a machine tool structural designer. This paper presents a novel approach to the design of a machine tool column for static and dynamic rigidity requirements. Model evaluation is done effectively through use of the general finite element analysis software ANSYS. Studies on the machine tool column are used to illustrate a finite element based concept evaluation technique. This paper also presents results obtained from computations on thin-walled box-type columns subjected to torsional and bending loads in the static analysis, as well as results from modal analysis. The columns analyzed are square- and rectangle-based tapered open columns, and columns with cover plates, horizontal partitions and apertures. For the analysis, a total of 70 columns were analyzed for bending, torsional and modal behavior. In this study it is observed that the orientation and aspect ratio of the apertures have no significant effect on the static and dynamic rigidity of the machine tool structure.
Keywords: Finite Element Modeling, Modal Analysis, Machine tool structure, Static Analysis.
206 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golumb Rice Code
Authors: N. Muthukumaran, R. Ravi
Abstract:
A simulation-based VLSI implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression, implemented in a simulation-oriented VLSI (Very Large Scale Integration) flow. The aim is to analyze the performance of lossless image compression, reduce the image data without losing image quality, and then implement the FELICS algorithm in the VLSI domain. The FELICS algorithm uses a simplified adjusted binary code for image compression; the compressed image is processed pixel by pixel and then implemented in the VLSI domain. This approach is used to achieve high processing speed and to minimize area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. A color difference preprocessing step is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture with a regular pipelined data flow of four stages. With two-level parallelism, consecutive pixels can be classified into even and odd samples, and an individual hardware engine is dedicated to each. This method can be further enhanced by multilevel parallelism.
Keywords: Image compression, Pixel, Compression Ratio, Adjusted Binary code, Golumb Rice code, High Definition display, VLSI Implementation.
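FELICS codes in-range pixels with an adjusted binary code and out-of-range residuals with a Golomb-Rice code; the abstract gives no implementation detail, so the following is only a generic Golomb-Rice encoder sketch (the parameter k is chosen arbitrarily), not the paper's hardware design.

```python
def golomb_rice_encode(value, k):
    """Encode a non-negative integer with a Golomb-Rice code of parameter k.

    The quotient value >> k is written in unary (q ones and a terminating zero),
    followed by the k-bit binary remainder.
    """
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Illustrative usage with k = 2: small residuals get short codewords.
for v in (0, 1, 5, 12):
    print(v, "->", golomb_rice_encode(v, k=2))
```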
205 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we have presented and analyzed three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with a NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3D-OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users. Also, various types of noise and multiple access interference (MAI) effects were considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
Keywords: Cross correlation, three-dimensional optical code division multiple access, spectral amplitude coding optical code division multiple access, multiple access interference, phase induced intensity noise, three-dimensional modified quadratic congruence/modified prime code.
204 Template-Based Object Detection through Partial Shape Matching and Boundary Verification
Authors: Feng Ge, Tiecheng Liu, Song Wang, Joachim Stahl
Abstract:
This paper presents a novel template-based method to detect objects of interest in real images by shape matching. To locate a target object that has a shape similar to a given template boundary, the proposed method integrates three components: contour grouping, partial shape matching, and boundary verification. In the first component, low-level image features, including edges and corners, are grouped into a set of perceptually salient closed contours using an extended ratio-contour algorithm. In the second component, we develop a partial shape matching algorithm to identify the fractions of detected contours that partly match given template boundaries. Specifically, we represent template boundaries and detected contours using landmarks, and apply a greedy algorithm to search for the matched landmark subsequences. For each matched fraction between a template and a detected contour, we estimate an affine transform that transforms the whole template into a hypothetic boundary. In the third component, we provide an efficient algorithm based on oriented edge lists to determine the target boundary from the hypothetic boundaries by checking each of them against image edges. We evaluate the proposed method on recognizing and localizing 12 template leaves in a data set of real images with cluttered backgrounds, illumination variations, occlusions, and image noise. The experiments demonstrate the high performance of our proposed method.
Keywords: Object detection, shape matching, contour grouping.
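The affine estimation step in the second component can be illustrated with a standard least-squares fit from matched landmark pairs; the sketch below is a generic formulation with invented landmark coordinates, not the authors' implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping 2-D points src -> dst.

    Solves [x y 1] @ M = [x' y'] for the 3x2 matrix M, so three or more
    non-collinear landmark correspondences are required.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])      # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M                                          # apply with: pts_h @ M

# Illustrative landmark correspondences (template -> detected contour fraction).
template = [(0, 0), (1, 0), (1, 1), (0, 1)]
detected = [(2, 1), (4, 1.2), (3.8, 3.1), (1.8, 2.9)]
M = fit_affine(template, detected)
mapped = np.hstack([np.asarray(template, float), np.ones((4, 1))]) @ M
print(np.round(mapped, 2))   # transformed template approximates the detected points
```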
203 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation, and it involves selecting among different MIMO transmission schemes or modes so as to adapt to the varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes - transmit diversity (TD) and spatial multiplexing (SM) - using a fuzzy logic technique. In this method, two channel quality indicators (CQIs), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision - an inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic based switching technique outperforms the conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: Channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity.
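The abstract does not list the membership functions or rule base, so the sketch below is a deliberately small stand-in: triangular memberships over RSNR and RSSI, a two-rule Mamdani-style inference, and a crisp TD/SM decision. All thresholds and rules are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def switch_scheme(rsnr_db, rssi_dbm):
    """Return 'SM' or 'TD' from two channel quality indicators.

    Illustrative rules: good RSNR and good RSSI favor spatial multiplexing (SM);
    otherwise transmit diversity (TD) is kept for robustness.
    """
    snr_good = tri(rsnr_db, 10, 25, 40)         # invented membership parameters
    snr_poor = tri(rsnr_db, -5, 5, 15)
    rssi_good = tri(rssi_dbm, -75, -55, -35)
    rssi_poor = tri(rssi_dbm, -110, -90, -70)

    fire_sm = min(snr_good, rssi_good)          # rule 1: both good -> SM
    fire_td = max(snr_poor, rssi_poor)          # rule 2: either poor -> TD
    return "SM" if fire_sm > fire_td else "TD"

print(switch_scheme(rsnr_db=28, rssi_dbm=-50))  # SM
print(switch_scheme(rsnr_db=6,  rssi_dbm=-95))  # TD
```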
202 An Energy Aware Data Aggregation in Wireless Sensor Network Using Connected Dominant Set
Authors: M. Santhalakshmi, P. Suganthi
Abstract:
Wireless Sensor Networks (WSNs) have many advantages. Their deployment is easier and faster than that of wired sensor networks or other wireless networks, as they do not need fixed infrastructure. Nodes are partitioned into many small groups, named clusters, to aggregate data through network organization. WSN clustering guarantees performance achievement of the sensor nodes. Sensor node energy consumption is reduced by eliminating redundant energy use and balancing energy use across the network. The aim of such clustering protocols is to prolong network life. Low Energy Adaptive Clustering Hierarchy (LEACH) is a popular protocol in WSNs. LEACH is a clustering protocol in which random rotation of local cluster heads is utilized in order to distribute the energy load among all sensor nodes in the network. This paper proposes Connected Dominant Set (CDS) based cluster formation. CDS aggregates data in a promising approach for reducing routing overhead, since messages are transmitted only within the virtual backbone formed by the CDS, and data aggregation also lowers the ratio of responding hosts to the hosts existing in the virtual backbone. CDS tries to increase network lifetime, considering such parameters as sensor lifetime and remaining and consumed energy, in order to achieve almost optimal data aggregation within the network. Experimental results proved that CDS outperformed LEACH regarding number of cluster formations, average packet loss rate, average end-to-end delay, lifetime computation, and remaining energy computation.
Keywords: Wireless sensor network, connected dominant set, clustering, data aggregation.
201 Determining G-γ Degradation Curve in Cohesive Soils by Dilatometer and in situ Seismic Tests
Authors: Ivandic Kreso, Spiranec Miljenko, Kavur Boris, Strelec Stjepan
Abstract:
This article discusses the possibility of using dilatometer tests (DMT) together with in situ seismic tests (MASW) in order to get the shape of the G-γ degradation curve in cohesive soils (clay, silty clay, silt, clayey silt and sandy silt). The MASW test provides the small-strain soil stiffness (G0 from vs) at very small strains, and the DMT provides the stiffness of the soil at 'work strains' (MDMT). At different test locations, the dilatometer shear stiffness of the soil has been determined by the theory of elasticity. The dilatometer shear stiffness has been compared with the theoretical G-γ degradation curve in order to determine the typical range of shear deformation for different types of cohesive soil. The analysis also includes factors that influence the shape of the degradation curve (G-γ) and the dilatometer modulus (MDMT), such as the overconsolidation ratio (OCR), the plasticity index (IP) and the vertical effective stress in the soil (σvo'). A parametric study in this article defines the range of shear strain γDMT and the GDMT/G0 relation depending on the classification of the cohesive soil (clay, silty clay, clayey silt, silt and sandy silt), its density (loose, medium dense and dense) and the stiffness of the soil (soft, medium hard and hard). The article illustrates the potential of using MASW and DMT to obtain the G-γ degradation curve in cohesive soils.
Keywords: Dilatometer testing, MASW testing, shear wave, soil stiffness, stiffness reduction, shear strain.
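For context, the small-strain shear modulus quoted as "G0 from vs" follows the standard elasticity relation G0 = ρ·vs²; the snippet below just evaluates that relation and the normalized ratio GDMT/G0 for illustrative numbers (the density, shear wave velocity and MDMT-derived stiffness are invented, not from the paper).

```python
def small_strain_shear_modulus(rho_kg_m3, vs_m_s):
    """G0 = rho * vs^2, returned in MPa (standard small-strain elasticity relation)."""
    return rho_kg_m3 * vs_m_s ** 2 / 1e6

# Illustrative numbers only (not from the paper).
rho = 1900.0          # bulk density, kg/m^3
vs = 180.0            # shear wave velocity from MASW, m/s
G0 = small_strain_shear_modulus(rho, vs)          # about 61.6 MPa
G_dmt = 25.0          # working-strain shear stiffness derived from MDMT, MPa
print(f"G0 = {G0:.1f} MPa, GDMT/G0 = {G_dmt / G0:.2f}")
```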
200 Coupling Heat and Mass Transfer for Hydrogen-Assisted Self-Ignition Behaviors of Propane-Air Mixtures in Catalytic Micro-Channels
Authors: Junjie Chen, Deguang Xu
Abstract:
Transient simulations of the hydrogen-assisted self-ignition of propane-air mixtures were carried out in platinum-coated micro-channels from ambient cold-start conditions, using a two-dimensional model with reduced-order reaction schemes, heat conduction in the solid walls, convection and surface radiation heat transfer. The self-ignition behavior of the hydrogen-propane mixed fuel is analyzed and compared with the heated-feed case. Simulations indicate that hydrogen can successfully cause self-ignition of propane-air mixtures in catalytic micro-channels with a 0.2 mm gap size, eliminating the need for startup devices. The minimum hydrogen composition for propane self-ignition is found to be in the range of 0.8-2.8% (on a molar basis), and increases with increasing wall thermal conductivity and decreasing inlet velocity or propane composition. A higher propane-air ratio results in earlier ignition. The ignition characteristics of hydrogen-assisted propane qualitatively resemble the selective inlet feed preheating mode. The transient response of the mixed hydrogen-propane fuel reveals sequential ignition of propane followed by hydrogen. Front-end propane ignition is observed in all cases. Low wall thermal conductivities cause earlier ignition of the mixed hydrogen-propane fuel, subsequently resulting in low exit temperatures. The transient-state behavior of this micro-scale system is described, and the startup time and minimization of hydrogen usage are discussed.
Keywords: Micro-combustion, Self-ignition, Hydrogen addition, Heat transfer, Catalytic combustion, Transient simulation.
199 Sustainable Building Technologies for Post-Disaster Temporary Housing: Integrated Sustainability Assessment and Life Cycle Assessment
Authors: S. M. Amin Hosseini, Oriol Pons, Albert de la Fuente
Abstract:
After natural disasters, displaced people (DP) require large numbers of housing units, which have to be erected quickly due to emergency pressures. These tight timeframes can multiply the environmental impacts of construction. These negative impacts worsen the already high energy consumption and pollution caused by the building sector. Indeed, post-disaster housing, which is often carried out without pre-planning, usually causes high negative environmental impacts, besides other economic and social impacts. Therefore, it is necessary to establish a suitable strategy to deal with this problem, one which also takes into account the instability of its causes, such as the changing ratio between rural and urban populations. To this end, this study aims to present a model that assists decision-makers in choosing the most suitable building technology for post-disaster housing units. This model focuses on the alternatives' sustainability and the fulfillment of the stakeholders' satisfaction. Four building technologies have been analyzed to determine the most sustainable technology and to validate the presented model. In 2003, Bam earthquake DP had their temporary housing units (THUs) built using these four technologies: autoclaved aerated concrete blocks (AAC), concrete masonry units (CMU), pressed reed panels (PR), and 3D sandwich panels (3D). The results of this analysis confirm that PR and CMU obtain the highest sustainability indexes. However, the second-life scenario of the THUs could have considerable impacts on the results.
Keywords: Sustainability, post-disaster temporary housing, integrated value model for sustainability assessment (MIVES), life cycle assessment (LCA).
198 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid
Authors: Sunitha. S.L., V. Udayashankara
Abstract:
Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Cosine Transform Power Normalized Least Mean Square algorithm to improve the SNR and the convergence rate of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it establishes a faster convergence rate compared to the time-domain LMS, and this transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good ortho-normality, separability, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is a real-valued function and thus can be effectively used in real-time operation. The advantages of DCT-LMS as compared to the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations. There are different ways to obtain factorizations. This work uses the fast factored DCT algorithm developed by Chen and co-workers. The computer simulation results show superior convergence characteristics of the proposed algorithm, improving the SNR by at least 10 dB for input SNR less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
Keywords: Hearing impairment, DCT adaptive filter, sensorineural loss patients, convergence rate.
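To make the transform-domain idea concrete, here is a generic power-normalized DCT-LMS adaptive filter sketch (not the fast factored hardware implementation of the paper): the tap-input vector is rotated by the DCT, each transformed coefficient is normalized by a running estimate of its power, and the usual LMS update follows. The filter length, step size and the toy identification setup are illustrative.

```python
import numpy as np
from scipy.fft import dct

def dct_lms(x, d, n_taps=16, mu=0.5, beta=0.99, eps=1e-6):
    """Power-normalized DCT-LMS: adapt w so that w @ DCT(tap vector) tracks d."""
    w = np.zeros(n_taps)          # weights in the transform domain
    p = np.full(n_taps, eps)      # running power estimate per DCT bin
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        tap = x[n - n_taps + 1:n + 1][::-1]            # [x[n], x[n-1], ...]
        u = dct(tap, norm="ortho")                     # DCT of the tap-input vector
        p = beta * p + (1 - beta) * u ** 2             # per-bin power tracking
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * e * u / (p + eps)                    # power-normalized LMS update
    return y, w

# Toy system identification: recover a short FIR filter buried in noise.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
h = np.array([0.6, -0.3, 0.1])
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))
y, w = dct_lms(x, d)
print("steady-state error power:", np.mean((d[-500:] - y[-500:]) ** 2))
```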
197 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications
Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber
Abstract:
Availability of high-dimensional biological datasets, such as from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a specific disease versus healthy controls. However, most of the existing research only focuses on a specific dataset. There is a lack of generic comparison between classifiers, which might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, based on mock datasets. We mimic common biological scenarios, simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC compared to the other methods. This may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better when having a higher number of discriminators.
Keywords: Classification, High dimensional data, Machine learning
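A comparison of this kind can be reproduced in outline with scikit-learn; the snippet below builds a mock dataset with a chosen proportion of informative features and compares the same six classifier families by cross-validated AUC. The dataset size, feature counts and hyperparameters are illustrative, not those of the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Mock high-dimensional data: few informative "biomarkers" among many noise features.
X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           n_redundant=0, class_sep=0.8, random_state=0)

models = {
    "SVM": SVC(kernel="linear"),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} mean AUC = {auc.mean():.3f}")
```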
196 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo
Abstract:
Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, communication signal automatic modulation recognition methods are mainly divided into two major categories. One is the maximum likelihood hypothesis testing method based on decision theory; the other is the statistical pattern recognition method based on feature extraction. Currently, the most commonly used is the statistical pattern recognition method, which includes feature extraction and classifier design. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature. An extreme learning machine (ELM), aimed at the real-time requirements of modern warfare, is used to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low SNR environment, and uses the improved cloud model to obtain more stable Holder cloud features, so the performance of the algorithm is improved. This algorithm addresses the problem that a simple feature extraction algorithm based on the Holder coefficient feature is difficult to use for recognition at low SNR, and it also achieves better recognition accuracy. The simulation results show that the approach in this paper still gives a good classification result at low SNR; even when the SNR is -15 dB, the recognition accuracy still reaches 76%.
Keywords: Communication signal, feature extraction, Holder coefficient, improved cloud model.
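The paper's improved cloud-model features are not specified in the abstract; as background only, a commonly used Holder coefficient between a signal sequence and a reference sequence is the Hölder-inequality-style ratio sketched below. The conjugate exponents and the rectangular/triangular reference sequences are typical choices in this literature, assumed here rather than taken from the paper.

```python
import numpy as np

def holder_coefficient(f1, f2, p=2.0):
    """Holder-inequality-style similarity between two non-negative sequences.

    Hc = sum(f1*f2) / ((sum(f1**p))**(1/p) * (sum(f2**q))**(1/q)), with 1/p + 1/q = 1.
    With p = q = 2 this reduces to the cosine similarity of the two sequences.
    """
    q = p / (p - 1.0)
    f1 = np.abs(np.asarray(f1, dtype=float))
    f2 = np.abs(np.asarray(f2, dtype=float))
    return float(np.sum(f1 * f2) /
                 ((np.sum(f1 ** p)) ** (1 / p) * (np.sum(f2 ** q)) ** (1 / q)))

# Illustrative use: compare a signal's spectrum magnitude with two reference shapes.
spectrum = np.abs(np.fft.fft(np.random.default_rng(0).normal(size=256)))
rect = np.ones_like(spectrum)                        # rectangular reference
tri = np.bartlett(len(spectrum))                     # triangular reference
print(holder_coefficient(spectrum, rect), holder_coefficient(spectrum, tri))
```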
195 The Expression of Lipoprotein Lipase Gene with Fat Accumulations and Serum Biochemical Levels in Betong (KU Line) and Broiler Chickens
Authors: W. Loongyai, N. Saengsawang, W. Danvilai, C. Kridtayopas, P. Sopannarath, C. Bunchasak
Abstract:
The Betong chicken is a slow-growing and lean strain, while the rapid growth of the broiler is accompanied by increased fat. We investigated the growth performance, fat accumulation, serum lipid biochemical levels and lipoprotein lipase (LPL) gene expression of female Betong (KU line) at the age of 4 and 6 weeks. A total of 80 female Betong (KU line) chickens and 80 female broiler chickens were reared under an open system (each group had 4 replicates of 20 chicks per pen). The results showed that feed intake and average daily gain (ADG) of the broiler chickens were significantly higher than those of Betong (KU line) (P < 0.01), while the feed conversion ratio (FCR) of Betong (KU line) was significantly lower than that of the broiler chickens (P < 0.01) at 6 weeks. At 4 and 6 weeks, two birds per replicate were randomly selected and slaughtered. Carcass weight did not differ significantly between treatments; the percentage of abdominal fat and subcutaneous fat yield was higher in the broiler (P < 0.01) at 4 and 6 weeks. Total cholesterol and LDL levels of the broiler were higher than those of Betong (KU line) at 4 and 6 weeks (P < 0.05). Abdominal fat samples were collected for total RNA extraction. The cDNA was amplified using primers specific for LPL gene expression and analysed using real-time PCR. The results showed that the expression of the LPL gene did not differ between Betong (KU line) and broiler chickens at the age of 4 and 6 weeks (P > 0.05). Our results indicated that broiler chickens had a high growth rate and fat accumulation compared with Betong (KU line) chickens, whereas LPL gene expression did not differ between breeds.
Keywords: Lipoprotein lipase gene, Betong (KU line), broiler, abdominal fat, gene expression.
194 Mix Proportioning and Strength Prediction of High Performance Concrete Including Waste Using Artificial Neural Network
Authors: D. G. Badagha, C. D. Modhera, S. A. Vasanwala
Abstract:
There is a great challenge for the civil engineering field to contribute to environmental protection by finding alternatives to cement and natural aggregates. Global warming concerns arise from cement utilization in concrete, so it is necessary to provide a sustainable solution by producing concrete containing waste. It is very difficult to produce a designated grade of concrete containing different ingredients and water-cement ratios, including waste, and to achieve the desired fresh and hardened properties of the concrete as per requirements and specifications. To achieve the desired grade of concrete, a number of trials have to be carried out, and only after evaluating the different parameters of long-term performance can the concrete be finalized for use for different purposes. This research work is carried out to solve the problems of time, cost and serviceability in the field of construction. In this research work, an artificial neural network is introduced to fix the proportions of concrete ingredients with 50% waste replacement for M20, M25, M30, M35, M40, M45, M50, M55 and M60 grades of concrete. By using the neural network, the mix design of high performance concrete was finalized, and the main basic mechanical properties were predicted at 3 days, 7 days and 28 days. The predicted strength was compared with the actual experimental mix design and concrete cube strength after 3 days, 7 days and 28 days. This experimental and neural-network-based mix design can be used practically in the field to give cost-effective, time-saving, feasible and sustainable high performance concrete for different types of structures.
Keywords: Artificial neural network, ANN, high performance concrete, rebound hammer, strength prediction.
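The abstract names the approach but not the network architecture or training data; purely as an illustration of an ANN strength-prediction workflow, the sketch below trains a small multilayer perceptron on synthetic mix-proportion features (cement content, waste replacement, water-cement ratio, curing age) against a made-up strength target. All data, feature names and network settings are invented, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 400
# Synthetic mix features: cement (kg/m^3), waste replacement (%), w/c ratio, age (days).
X = np.column_stack([
    rng.uniform(300, 500, n),          # cement content
    rng.uniform(0, 50, n),             # waste replacement percentage
    rng.uniform(0.35, 0.55, n),        # water-cement ratio
    rng.choice([3, 7, 28], n),         # curing age in days
])
# Made-up strength relation: gains with cement and age, loses with waste and w/c.
y = (0.08 * X[:, 0] - 0.1 * X[:, 1] - 40.0 * X[:, 2]
     + 12.0 * np.log(X[:, 3]) + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("MAE on held-out mixes:", mean_absolute_error(y_te, model.predict(X_te)))
```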
193 Digital Automatic Gain Control Integrated on WLAN Platform
Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel
Abstract:
In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we have used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control over those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC in this process is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is done in a few steps: measuring the power levels of baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor based on the accumulation, and applying the adjusted gain factor to the baseband values. Based on the measurement results of the RSSI signal dependence on input power, we have concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but also effective, and it reduces the complexity and power consumption of the DAGC. This DAGC is implemented and tested both in FPGA and in ASIC as a part of our WLAN baseband processor. Finally, we have integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
Keywords: WLAN, AGC, RSSI, baseband processor
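Read literally, the step sequence above is an integrating control loop; the sketch below models it in software (measure block power in dB, accumulate the error against a set point, convert the accumulated error into a gain adjustment). The set point, loop gain and signal model are invented for illustration, and the real design drives LNA/VGA settings rather than multiplying samples.

```python
import numpy as np

def dagc(samples, block=64, setpoint_db=0.0, loop_gain=0.25):
    """Integrating digital AGC: per block, measure power, accumulate error, adjust gain."""
    gain_db = 0.0
    out = np.array(samples, dtype=float)
    for i in range(0, len(samples) - block + 1, block):
        x = samples[i:i + block] * 10 ** (gain_db / 20)       # apply current gain
        power_db = 10 * np.log10(np.mean(np.abs(x) ** 2) + 1e-12)
        gain_db += loop_gain * (setpoint_db - power_db)       # accumulate the error
        out[i:i + block] = x
    return out, gain_db

# Toy input 18 dB below the set point; the loop drives block power toward 0 dB.
rng = np.random.default_rng(0)
sig = 10 ** (-18 / 20) * rng.normal(size=4096)
_, final_gain = dagc(sig)
print(f"converged gain ~ {final_gain:.1f} dB")   # approaches +18 dB
```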
192 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties
Authors: J. Samuel, S. Al-Enezi, A. Al-Banna
Abstract:
High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) has been prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersion agent. The prepared samples were characterized by thermogravimetric (TGA) and differential scanning calorimetric (DSC) analyses. The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix and a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites. An improvement in the viscoelastic properties was observed as the nanofiber concentration increased. The increase in the modulus values was attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower values of complex viscosity and modulus than the corresponding HDPE/CNF compositions. Therefore, the reduction in melt viscosity is an additional benefit for polymer composite processing, resulting from the wetting effect of the polymer-ionic liquid combination.
Keywords: HDPE, carbon nanofiber, ionic liquid, complex viscosity, modulus.
191 Quality Assessment of Hollow Sandcrete Blocks in Minna, Nigeria
Authors: M. Abdullahi, S. Sadiku, Bashar S. Mohammed, J. I. Aguwa
Abstract:
The properties of hollow sandcrete blocks produced in Minna, Nigeria are presented. A sandcrete block is made of cement, water and sand bound together in certain mix proportions. For the purpose of this work, fifty (50) commercial sandcrete block industries were visited in Minna, Nigeria, to obtain block samples and the aggregates used for manufacture, and to take an inventory of the mix composition and the production process. Sieve analysis tests were conducted on the soil samples from the various block industries to ascertain their suitability for block making. The mix ratios were also investigated. Five (5) nine-inch (9'' or 225 mm) blocks were obtained from each block industry and tested for dimensional compliance and compressive strength. The results of the soil tests show that the grading falls within the limits for natural aggregates and can easily be used to obtain a workable mix. Physical examination of the block sizes shows slight deviation from the standard requirement in NIS 87:2000. Compressive strengths of the hollow sandcrete blocks in the range of 0.12 N/mm2 to 0.54 N/mm2 were obtained, which is below the recommended value of 3.45 N/mm2 for load-bearing hollow sandcrete blocks. This indicates that these blocks are below the standard for load-bearing sandcrete blocks and cannot be used as load-bearing walling units. The mix compositions also indicated low cement content, resulting in low compressive strength. Most of the commercial block industries visited do not take curing very seriously. Water was only sprinkled once or twice before the blocks were stacked and made readily available for sale. It is recommended that a mix ratio of 1:4 to 1:6 should be used for the production of sandcrete blocks and that proper curing practice should be adhered to. Blocks should also be cured for 14 days before making them available to consumers.
Keywords: Compressive strength, dimensions, mix proportions, sandcrete blocks.
190 Optimization of Some Process Parameters to Produce Raisin Concentrate in Khorasan Region of Iran
Authors: Peiman Ariaii, Hamid Tavakolipour, Mohsen Pirdashti, Rabehe Izadi Amoli
Abstract:
Raisin concentrate (RC) is among the most important products obtained in the raisin processing industries. RC products are now used to make syrups, drinks and confectionery products, and are introduced as a natural substitute for sugar in food applications. Iran is one of the biggest raisin exporters in the world but, unfortunately, despite good raw material, no serious effort to extract RC has been made in Iran. Therefore, in this paper, we determined and analyzed the parameters affecting the RC extraction process and then optimized these parameters to design the RC extraction process for two types of raisin (round and long) produced in the Khorasan region. Two levels of solvent (1:1 and 2:1), three levels of extraction temperature (60 °C, 70 °C and 80 °C), and three levels of concentration temperature (50 °C, 60 °C and 70 °C) were the treatments. Finally, the physicochemical characteristics of the obtained concentrate, such as color, viscosity, percentage of reducing sugar and acidity, were measured, and microbial counts (mould and yeast) were taken. The analysis was performed on the basis of a factorial experiment in a completely randomized design (CRD), and Duncan's multiple range test (DMRT) was used for the comparison of the means. Statistical analysis of the results showed that the optimal conditions for concentrate production are obtained with round raisins when the solvent ratio is 2:1, the extraction temperature is 60 °C and the concentration temperature is 50 °C. The round raisin is cheaper than the long one, and it is more economical for concentrate production. Furthermore, the round raisin has more aroma and a lower color degree with increasing concentration and extraction temperatures. Finally, according to the mentioned factors, the concentrate of the round raisin is recommended.
Keywords: Raisin concentrate, optimization, process parameters, round raisin, Iran.
189 Enhancement of Mechanical Properties for Al-Mg-Si Alloy Using Equal Channel Angular Pressing
Authors: A. Nassef, S. Samy, W. H. El Garaihy
Abstract:
Equal channel angular pressing (ECAP) of a commercial Al-Mg-Si alloy was conducted using two strain rates. The ECAP processing was conducted at room temperature and at 250 °C. Route A was adopted up to a total of four passes in the present work. The structural evolution of the aluminum alloy discs was investigated before and after ECAP processing using optical microscopy (OM). Following ECAP, simple compression tests and Vickers hardness measurements were performed. OM micrographs showed that the average grain size of the as-received Al-Mg-Si disc tends to be larger than that of the ECAP-processed discs. Moreover, a significant difference in the grain morphologies of the as-received and processed discs was observed. The intensity of deformation was observed via the alignment of the Al-Mg-Si consolidated particles (grains) in the direction of shear, which increased with increasing number of ECAP passes. Increasing the number of passes up to 4 resulted in an increase of the grain aspect ratio up to ~5. It was found that the pressing temperature has a significant influence on the microstructure, Hv-values, and compressive strength of the processed discs. Hardness measurements demonstrated that 1 pass resulted in an increase of the Hv-value by 42% compared to that of the as-received alloy. Four passes of ECAP processing resulted in an additional increase in the Hv-value. A similar trend was observed for the yield and compressive strength. Experimental data on the Hv-values demonstrated a lack of any significant dependence on the processing strain rate.
Keywords: Al-Mg-Si alloy, Equal channel angular pressing, Grain refinement, Severe plastic deformation.
188 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images
Authors: U. Datta
Abstract:
The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral image data over a long period of time, assuming there is no considerable change during that time period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the probabilistic model estimated for the corresponding pixel. A changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. The multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is to get quite a large number of datasets for multivariate distribution modelling. A large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will be a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.
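One simplified reading of the pixel-wise GLRT described above is a Gaussian test: each pixel's multispectral history defines a mean vector and covariance, and a new observation is flagged when its (GLRT-equivalent) Mahalanobis distance exceeds a chi-square threshold. The Gaussian assumption, threshold level and the synthetic data below are illustrative, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import chi2

def glrt_change_map(history, new_image, alpha=0.001):
    """Pixel-wise change detection under a multivariate Gaussian no-change model.

    history:   (T, H, W, B) co-registered multispectral stack (no-change period)
    new_image: (H, W, B) later acquisition
    Returns a boolean (H, W) change map.
    """
    T, H, W, B = history.shape
    mean = history.mean(axis=0)                            # per-pixel spectral mean
    flat = (history - mean).reshape(T, H * W, B)
    cov = np.einsum('tpb,tpc->pbc', flat, flat) / (T - 1)  # per-pixel B x B covariance
    cov += 1e-6 * np.eye(B)                                # regularize for invertibility
    diff = (new_image - mean).reshape(H * W, B)
    inv = np.linalg.inv(cov)
    d2 = np.einsum('pb,pbc,pc->p', diff, inv, diff)        # squared Mahalanobis distance
    return (d2 > chi2.ppf(1 - alpha, df=B)).reshape(H, W)

# Synthetic demo: a stable 20x20 scene with a changed 5x5 patch in the new image.
rng = np.random.default_rng(0)
hist = rng.normal(0.3, 0.02, size=(12, 20, 20, 4))
new = rng.normal(0.3, 0.02, size=(20, 20, 4))
new[5:10, 5:10] += 0.2
print(glrt_change_map(hist, new)[5:10, 5:10].mean())       # close to 1.0
```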
187 Evaluation of Groundwater and Seawater Intrusion at Tajoura Area, Northwest Libya
Authors: Abdalraheem Huwaysh, Yasmin ElAhmar
Abstract:
Water quality is an important factor that determines its usage for domestic, agricultural and industrial purposes. This study was carried out in the Tajoura Area, Jifarah Plain, Northwest Libya. Chemical and physical parameters were measured and analyzed for groundwater samples collected in 2021 from 26 wells distributed throughout the investigated area. Overexploitation of groundwater has caused considerable deterioration in the water quality, especially at Tajoura Town (20 km east of Tripoli). The aquifer shows an increase in salinization, which has reached an alarming level in many places during the past 25 years as a result of seawater intrusion. Based on the WHO and Libyan standards, groundwater from the targeted area is not suitable for direct drinking purposes. Sodium is the dominant cation, while the dominant anion is chloride. Based on the Piper trilinear diagram, most of the groundwater samples (90%) were identified as sodium chloride type. The best groundwater quality exists in the southern part of the study area. Serious degradation in the water quality, expressed as an increase in salinity, occurs towards the coastline. The abundance of NaCl waters is strong evidence for attributing the successive deterioration of the water quality to seawater intrusion. Considering the Cl- concentration values and the Cl-/HCO3- ratio, about 70% of the groundwater samples were strongly affected by saline water. Car wash stations in the study area, as well as the unlined disposal pond used for the collection of untreated wastewater, contribute significantly to the deterioration of the water quality. In the area of interest (Tajoura), treatment of the groundwater before drinking is essential, and its quality needs to be routinely checked.
Keywords: Tajoura, groundwater, overexploitation, seawater intrusion.
186 Seismic Performance Evaluation of the Composite Structural System with Separated Gravity and Lateral Resistant Systems
Authors: Zi-Ang Li, Mu-Xuan Tao
Abstract:
During the industrialization of steel structure housing, a composite structural system with separated gravity and lateral resistant systems has been applied in engineering practice; it consists of a composite frame with hinged beam-column joints, steel braces and RC shear walls. As an attempt in the steel structural system area, seismic performance evaluation of the separated composite structure is important for its further application in steel housing. This paper focuses on the seismic performance comparison of the separated composite structural system and the traditional steel frame-shear wall system under the same inter-story drift ratio (IDR) provision limit. The same architectural layout of a high-rise building is designed as two different structural systems at the same IDR level, and finite element analysis using the pushover method is carried out. Static pushover analysis implies that the separated structural system exhibits a different lateral deformation mode and failure mechanism from the traditional steel frame-shear wall system. Different indexes are adopted and discussed in the seismic performance evaluation, including the IDR, safety factor (SF), shear wall damage, etc. The performance under the maximum considered earthquake (MCE) demand spectrum shows that the shear wall damage of the two structural systems is similar; the separated composite structural system exhibits fewer plastic hinges; and the SF index value of the separated composite structural system is higher than that of the steel frame-shear wall structural system.
Keywords: Finite element analysis, seismic performance evaluation, separated composite structural system, static pushover analysis.
185 Geosynthetic Reinforced Unpaved Road: Literature Study and Design Example
Authors: D. Jayalakshmi, S. Bhosale
Abstract:
This paper, in its first part, presents the state-of-the-art literature on design approaches for geosynthetic reinforced unpaved roads. The literature starting from 1970, together with the critical appraisal of flexible pavement design by Giroud and Han (2004) and Jonathan Fannin (2006), is presented. The design example is illustrated for Indian conditions. The example compares the results computed by Giroud and Han's (2004) design method against the Indian Roads Congress guidelines IRC SP 72-2015. The input data considered are related to the subgrade soil condition of Maharashtra State in India. The unified soil classification of the subgrade soil is inorganic clay with high plasticity (CH), which is expansive with a California Bearing Ratio (CBR) of 2% to 3%. The example covers the unreinforced case and geotextile reinforcement, varying the rut depth from 25 mm to 100 mm. The present results reveal that the base thickness for the unreinforced case from the IRC design catalogs is in good agreement with the Giroud and Han (2004) approach for a rut depth range of 75 mm to 100 mm. Since the Giroud and Han (2004) method is applicable to both reinforced and unreinforced cases, the base thickness for the reinforced case has been arrived at for the Indian condition using the same data, the appropriate Nc factor and the same rut depth. From this trial, for a CBR of 2%, the base thickness reduction due to geotextile inclusion is 35%. For the CBR range of 2% to 5% with different geosynthetic stiffnesses, the reduction in base course thickness will be evaluated, and the validation will be executed with the full-scale accelerated pavement testing setup at the College of Engineering Pune (COE), India.
Keywords: Base thickness, design approach, equation, full scale accelerated pavement set up, Indian condition.
184 Assessment of Urban Heat Island through Remote Sensing in Nagpur Urban Area Using Landsat 7 ETM+ Satellite Images
Authors: Meenal Surawar, Rajashree Kotharkar
Abstract:
The Urban Heat Island (UHI) effect is found to be more pronounced, and a prominent urban environmental concern, in developing cities. To study the UHI effect in the Indian context, the Nagpur urban area has been explored in this paper using Landsat 7 ETM+ satellite images through remote sensing and GIS techniques. This paper intends to study the effect of the LU/LC pattern on the daytime Land Surface Temperature (LST) variation contributing to UHI formation within the Nagpur urban area. Supervised LU/LC area classification was carried out to study urban change detection using ENVI 5. Change detection has been studied by computing the Normalized Difference Vegetation Index (NDVI) to understand the proportion of vegetative cover with respect to the built-up ratio. The spectral radiance from the thermal band of the satellite images was processed to retrieve LST. Specific representative areas, selected on the basis of the urban built-up and vegetation classification, were used for observation of point LST. Across the entire Nagpur urban area, as building density increases with a decrease in vegetation cover, LST increases, thereby causing the UHI effect. The UHI intensity gradually increased by 0.7 °C from 2000 to 2006; however, a drastic increase of 1.8 °C has been observed during the period 2006 to 2013. Within the Nagpur urban area, the UHI effect was formed due to the increase in building density and decrease in vegetative cover.
Keywords: Land use, land cover, land surface temperature, remote sensing, urban heat island.
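The NDVI and LST retrieval steps mentioned above follow standard Landsat 7 ETM+ processing: NDVI from the red and near-infrared bands, radiance from the thermal band, then brightness temperature via a Planck-type inversion with the published ETM+ band 6 constants (K1 = 666.09, K2 = 1282.71). The sketch below shows those formulas; the gain/bias values and the omission of an emissivity correction are simplifying assumptions, not the paper's exact calibration.

```python
import numpy as np

K1, K2 = 666.09, 1282.71          # Landsat 7 ETM+ band 6 thermal constants

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red + 1e-12)

def brightness_temperature(dn_band6, gain=0.037, bias=3.2):
    """Thermal DN -> radiance (assumed linear gain/bias) -> at-sensor temperature (K)."""
    radiance = gain * np.asarray(dn_band6, float) + bias
    return K2 / np.log(K1 / radiance + 1.0)

# Tiny illustrative arrays standing in for band subsets.
red = np.array([[0.12, 0.30]])
nir = np.array([[0.45, 0.32]])
dn6 = np.array([[140, 165]])
print(ndvi(red, nir))
print(brightness_temperature(dn6) - 273.15)   # degrees Celsius
```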
183 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications and influences in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, that is, the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is mainly divided into two parts. Firstly, a clustering algorithm is used to estimate the mixing matrix according to the observed signals. Then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied. This paper proposes an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers the idea of an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in the traditional clustering algorithm, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The results of simulations show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
Keywords: Clustering algorithm, potential function, speech signal, UBSS model.
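The clustering stage of sparse-component-analysis-based mixing matrix estimation can be illustrated with a basic k-means variant rather than the paper's improved potential function: in a sparse domain, observation vectors dominated by a single source line up along the corresponding mixing column, so clustering their normalized directions recovers the columns up to sign and order. The 2x4 mixing matrix, sparsity model and clustering choice below are all illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2, -0.5, 0.7],
              [0.4, 0.95, 0.85, -0.6]])          # true 2x4 mixing matrix (illustrative)
A /= np.linalg.norm(A, axis=0)

# Sparse sources: at each time instant only one of the four sources is active.
T = 4000
S = np.zeros((4, T))
active = rng.integers(0, 4, T)
S[active, np.arange(T)] = rng.laplace(size=T)
X = A @ S + 0.01 * rng.normal(size=(2, T))       # two-channel observations

# Keep high-energy frames and normalize their directions (fold the sign ambiguity).
keep = np.linalg.norm(X, axis=0) > 0.5
D = X[:, keep] / np.linalg.norm(X[:, keep], axis=0)
D *= np.sign(D[0:1, :] + 1e-12)                  # map each direction to a half-plane

est = KMeans(n_clusters=4, n_init=10, random_state=0).fit(D.T).cluster_centers_.T
est /= np.linalg.norm(est, axis=0)
print(np.round(est, 2))                          # columns match +/-A up to permutation
```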