Search results for: analog filters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 507

417 Operation Parameters of Vacuum Cleaned Filters

Authors: Wilhelm Hoeflinger, Thomas Laminger, Johannes Wolfslehner

Abstract:

For vacuum-cleaned dust filters, used e.g. in the textile industry, no calculation methods exist to determine design parameters (e.g. traverse speed of the nozzle, filter area...). In this work, a method was elaborated to calculate the optimum traverse speed of the nozzle of an industrial-size flat dust filter at a given mean pressure drop and filter face velocity. Well-known equations for the design of a cleanable multi-chamber baghouse filter were modified to take into account the continuous regeneration of a dust filter by a nozzle. This requires the specific filter-medium resistance and the specific cake resistance, which can be derived from filter tests under constant operating conditions. A lab-scale filter test rig was used to derive the specific filter-medium resistance and the specific cake resistance for vacuum-cleaned filter operation. Three different filter media were tested and the determined parameters were compared to each other.
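The design equations themselves are not given in the abstract; a minimal sketch of the kind of relation involved, assuming the common linear cake-filtration form dp = mu·v·(R_m + alpha·w), with all parameter values invented rather than taken from the paper's measurements:

```python
# Illustrative cake-filtration pressure-drop model: dp = mu * v * (R_m + alpha * w),
# where R_m is the specific filter-medium resistance, alpha the specific cake
# resistance, and w the areal dust load. All values below are invented.
def pressure_drop(v, w, mu=1.8e-5, R_m=1.0e10, alpha=5.0e9):
    """dp [Pa] at face velocity v [m/s] and areal dust load w [kg/m^2]."""
    return mu * v * (R_m + alpha * w)

def mean_dust_load(c_in, v, cleaning_interval):
    """Average cake load over one cleaning cycle: the traversing nozzle resets
    the cake every cleaning_interval seconds, so the load averages half its
    maximum (assuming linear build-up)."""
    return 0.5 * c_in * v * cleaning_interval

# slower nozzle traverse -> longer cleaning interval -> higher mean pressure drop
dp = pressure_drop(v=0.03, w=mean_dust_load(c_in=0.005, v=0.03, cleaning_interval=600))
```

Under this model, choosing the traverse speed amounts to choosing the cleaning interval that keeps the mean pressure drop at its target value.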

Keywords: design of dust filter, dust removing, filter regeneration, operation parameters

Procedia PDF Downloads 350
416 Human Lens Metabolome: A Combined LC-MS and NMR Study

Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich

Abstract:

Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. The lens tissue has a very specific structure: it has no vascular system, and the lens proteins, the crystallins, do not turn over throughout the lifespan. The protection of lens proteins is provided by metabolites which diffuse into the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, studying the changes in metabolite composition of a cataractous lens as compared to a normal lens may elucidate possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS). The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, the levels in the lens cortex and nucleus are similar, with a few exceptions among the antioxidants and UV filters: the concentrations of glutathione, ascorbate, and NAD decrease in the lens nucleus as compared to the cortex, while the levels of the secondary UV filters, formed from the primary UV filters in redox processes, increase. This confirms that the lens core is metabolically inert, and that metabolic activity in the lens nucleus is mostly restricted to protection from the oxidative stress caused by UV irradiation, spontaneous UV-filter decomposition, or other factors. The metabolomic compositions of normal and age-matched cataractous human lenses were found to differ significantly: the content of the most important metabolites, antioxidants, UV filters, and osmolytes, in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer, and that age-related cataractogenesis might originate from dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used to analyze changes in the lens chemical composition occurring with age and with cataract development.

Keywords: cataract, lens, NMR, LC-MS, metabolome

Procedia PDF Downloads 288
415 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, day and night. In this study, SAR images of military vehicles with different azimuth and depression angles are first pre-processed. The main purpose is to reduce the strong speckle noise found in SAR images. For this, the adaptive Wiener filter, the mean filter, and the median filter are used to reduce the speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are thresholded so that the target vehicle region is separated from regions containing unnecessary information: the brightest 20% of pixels are set to 255 and all other pixels to 0. In addition, segmentation with suitable parameters of the statistical region merging algorithm is performed for comparison. In the feature extraction step, feature vectors for the vehicles are obtained using Gabor filters; a bank of Gabor filters is created by varying the orientation, frequency, and angle parameters to extract the distinctive features of the images. Finally, the images are classified by the sparse representation method, using l₁-norm analysis. A joint database of the feature vectors generated from the target images of the military vehicle types is assembled and arranged in matrix form. To classify a vehicle, its test image is converted to vector form and l₁-norm sparse representation analysis is applied against the database matrix. As a result, correct recognition is performed by matching the target images of military vehicles with the test images by means of the sparse representation method, achieving 97% classification accuracy on SAR images of different military vehicle types.
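The classification-by-residual step above can be sketched as follows, with ISTA standing in for whichever l₁ solver the authors used; dictionary sizes and the class layout are toy values, not the paper's setup:

```python
import numpy as np

# Sparse-representation classification: stack training feature vectors of all
# classes into a dictionary A, solve an l1-regularised fit of the test vector,
# and assign it to the class whose columns reconstruct it with least residual.
rng = np.random.default_rng(0)
n_feat, n_per_class, n_classes = 20, 5, 3
A = rng.normal(size=(n_feat, n_per_class * n_classes))   # stacked training features
A /= np.linalg.norm(A, axis=0)                           # unit-norm columns

def ista(A, y, lam=0.05, iters=500):
    """Solve min 0.5||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2                  # step from spectral norm
    for _ in range(iters):
        g = x - t * A.T @ (A @ x - y)                    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold
    return x

y = A[:, 7] + 0.01 * rng.normal(size=n_feat)             # noisy sample of class 1
x = ista(A, y)
residuals = [np.linalg.norm(y - A[:, c * n_per_class:(c + 1) * n_per_class]
                            @ x[c * n_per_class:(c + 1) * n_per_class])
             for c in range(n_classes)]
predicted = int(np.argmin(residuals))                    # smallest residual wins
```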

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 339
414 Foggy Image Restoration Using Neural Network

Authors: Khader S. Al-Aidmat, Venus W. Samawi

Abstract:

Blurred vision in a misty atmosphere is an essential problem which needs to be resolved. To solve this problem, we developed a technique to restore the original scene from its foggy degraded version using a back-propagation neural network (BP-NN). The suggested technique is based on a mapping between a foggy scene and its corresponding original scene. Seven different approaches are suggested, based on the type of features used in image restoration. Features are extracted from the spatial and spatial-frequency domains (using the DCT). Each of these approaches comes with its own BP-NN architecture depending on the type and number of features used. The weight matrix resulting from training each BP-NN represents a fog filter. The performance of these filters is evaluated empirically (using PSNR) and perceptually. By comparing the performance of these filters, the effective features that suit the BP-NN technique for restoring foggy images are identified. The system proved its effectiveness in restoring moderately foggy images.
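The spatial-frequency feature extraction mentioned above can be sketched as a 2-D DCT-II of an image patch, keeping the low-frequency (top-left) coefficients as the feature vector fed to the network; the patch and feature count here are illustrative, not the paper's configuration:

```python
import numpy as np

# 2-D DCT-II via an orthonormal basis matrix: coefficients = C @ patch @ C.T,
# with C[k, j] = s_k * cos(pi * (2j + 1) * k / (2N)).
def dct2(patch):
    N = patch.shape[0]
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)          # DC row normalisation
    return C @ patch @ C.T

def dct_features(patch, k=4):
    # keep the k x k low-frequency corner as the BP-NN input features
    return dct2(patch.astype(float))[:k, :k].ravel()

patch = np.outer(np.hanning(8), np.hanning(8))   # toy 8x8 "image" patch
feats = dct_features(patch)                      # 16-element feature vector
```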

Keywords: artificial neural network, discrete cosine transform, feed forward neural network, foggy image restoration

Procedia PDF Downloads 359
413 Computer-Aided Detection of Liver and Spleen from CT Scans Using the Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years, a great deal of research has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is the semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we determine the region of interest by applying morphological operations to extract the liver and spleen. The second step improves the quality of the image gradient: we propose a method for improving the gradient that reduces the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a semi-automatic segmentation method for the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we tested it on several images, comparing our results with manual segmentation performed by an expert; the experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented liver and spleen contours and the contours traced manually by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%; spleen segmentation achieved similarly promising results, with a sensitivity of 95% and a specificity of 99%.
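The gradient-improvement step described above can be sketched with standard filters, assuming scipy.ndimage for the spatial and morphological operations; the synthetic square below stands in for an organ region in a real CT slice:

```python
import numpy as np
from scipy import ndimage

# Spatial (Gaussian) filter, then gradient magnitude, then morphological
# filters to flatten the spurious minima that cause watershed over-segmentation.
slice_ = np.zeros((64, 64))
slice_[16:48, 16:48] = 100.0                                   # bright "organ"
slice_ += np.random.default_rng(1).normal(0, 2, slice_.shape)  # acquisition noise

smoothed = ndimage.gaussian_filter(slice_, sigma=2)            # spatial filter
gradient = np.hypot(ndimage.sobel(smoothed, axis=0),
                    ndimage.sobel(smoothed, axis=1))
# grey-scale closing then opening removes small noise-induced gradient extrema
cleaned = ndimage.grey_opening(ndimage.grey_closing(gradient, size=3), size=3)
# 'cleaned' is the relief that would then be fed to the watershed transform
```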

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 284
412 Software Verification of Systematic Resampling for Optimization of Particle Filters

Authors: Osiris Terry, Kenneth Hopkinson, Laura Humphrey

Abstract:

Systematic resampling is the most popularly used resampling method in particle filters. This paper seeks to further the understanding of systematic resampling by defining a formula made up of variables from the sampling equation and the particle weights, and verifying that formula with SPARK, a software verification language. The verified systematic resampling formula states that the minimum/maximum number of possible samples taken of a particle equals the floor/ceiling value of the particle weight divided by the sampling interval, respectively. This allows for the creation of a randomness spectrum within which each resampling method falls. Methods on the lower end, e.g., systematic resampling, have less randomness and are thus quicker to reach an estimate. However, lower randomness brings a larger bias towards the size of the weight, and this bias creates vulnerabilities to noise in the environment, e.g., jamming. This work is a first step in characterizing each resampling method, which will allow target-tracking engineers to pick the best resampling method for their environment instead of defaulting to the most popular one.
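The verified property can be illustrated directly: in systematic resampling, the number of copies drawn of particle i lies between floor(w_i / u) and ceil(w_i / u), where u is the sampling interval (total weight divided by the number of samples). A minimal sketch, with toy weights:

```python
import math
import random

def systematic_resample(weights, n, seed=0):
    """Draw n indices with a single random offset and evenly spaced points."""
    rng = random.Random(seed)
    total = sum(weights)
    u = total / n                           # sampling interval
    start = rng.uniform(0.0, u)             # one random offset for all samples
    indices, cum, i = [], weights[0], 0
    for k in range(n):
        while start + k * u > cum:          # advance to the particle containing
            i += 1                          # the k-th sampling point
            cum += weights[i]
        indices.append(i)
    return indices, u

weights = [0.1, 0.4, 0.3, 0.2]
indices, u = systematic_resample(weights, n=8)
counts = [indices.count(i) for i in range(len(weights))]
# the verified bound: floor(w/u) <= copies <= ceil(w/u) for every particle
ok = all(math.floor(w / u) <= c <= math.ceil(w / u)
         for w, c in zip(weights, counts))
```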

Keywords: SPARK, software verification, resampling, systematic resampling, particle filter, tracking

Procedia PDF Downloads 50
411 A Fast Convergence Subband BSS Structure

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed; in this method we use a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive sub-band schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of the adaptive algorithm for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of lower order than the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each sub-band than the full-bandwidth input signal, which can promote better rates of convergence.
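The complexity argument can be made concrete with a back-of-the-envelope count of multiply-accumulates (MACs) for an LMS-type adaptive filter, ignoring the analysis/synthesis filter-bank overhead; the numbers are illustrative:

```python
def fullband_macs_per_sample(n_taps):
    return 2 * n_taps                       # filtering + coefficient update

def subband_macs_per_sample(n_taps, n_bands):
    # n_bands branches, each of order n_taps/n_bands, each running at
    # 1/n_bands of the input rate after decimation
    return n_bands * 2 * (n_taps // n_bands) / n_bands

N, M = 1024, 8
full = fullband_macs_per_sample(N)          # 2048 MACs per input sample
sub = subband_macs_per_sample(N, M)         # 256: an M-fold reduction
```

The shorter per-band filters also see whitened (less correlated) inputs, which is the source of the convergence-speed gain claimed above.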

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 523
410 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, the thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy remains lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES: Firstly, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), and direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. 
Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
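The a priori procedure referred to above can be sketched in one dimension: apply a Gaussian test filter in spectral space and compute the exact subfilter-scale stress tau = bar(uu) - bar(u)bar(u), the quantity the SFS models try to reconstruct. The field and filter width here are toy values:

```python
import numpy as np

N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
rng = np.random.default_rng(2)
# synthetic multi-scale periodic "velocity" field with random phases
u = sum(np.cos(k * x + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 20))

def gaussian_filter_1d(f, delta):
    k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)
    G = np.exp(-(k * delta) ** 2 / 24.0)     # Gaussian filter transfer function
    return np.real(np.fft.ifft(G * np.fft.fft(f)))

delta = 8 * (2 * np.pi / N)                  # filter width: 8 grid spacings
tau = gaussian_filter_1d(u * u, delta) - gaussian_filter_1d(u, delta) ** 2
# the mean SFS stress is positive: the filter removes small-scale energy
```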

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 42
409 A Subband BSS Structure with Reduced Complexity and Fast Convergence

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed; in this method, we use a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive sub-band schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of the adaptive algorithm for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of lower order than the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each subband than the full-bandwidth input signal, which can promote better rates of convergence.

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 550
408 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs

Authors: Muhammad Yasir Wadood, Fatemeh Babaeian

Abstract:

With the development of ultra-wideband (UWB) systems, there is high demand for UWB filters with low insertion loss, wide bandwidth, and a planar structure compatible with the other components of the UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure complicates the fabrication procedure. Especially in the higher frequency band, any misalignment of a drilled via hole with the microstrip stubs causes large errors between the measured and desired results. Moreover, in high-frequency designs the line width of the stubs is very narrow, so highly precise small via holes are required, which increases the fabrication cost significantly and adds a risk of fabrication errors. To combat this issue, this paper proposes a via-less UWB microstrip filter designed as a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are 1) replacing each via hole with a quarter-wavelength open-circuit stub to avoid manufacturing complexity, 2) using a bend structure to reduce unwanted coupling effects, and 3) minimising the size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) was designed and fabricated; promising simulation and measurement results are presented in this paper. The selected substrate was Rogers RO4003 with a thickness of 20 mils, a common substrate in industrial projects. The compact size of the proposed filter is highly beneficial for applications which require very miniature hardware.
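A quick first-order sizing of the quarter-wavelength open-circuit stubs that replace the via holes: the physical length is roughly lambda_g/4 = c / (4 f sqrt(eps_eff)). The effective permittivity below is a placeholder, not the paper's actual RO4003 design value:

```python
import math

def quarter_wave_stub_mm(f_hz, eps_eff):
    """Quarter guided wavelength in millimetres at frequency f_hz."""
    c = 299_792_458.0                       # speed of light [m/s]
    return c / (4.0 * f_hz * math.sqrt(eps_eff)) * 1e3

# mid-band of the 3.9-6.6 GHz passband, placeholder eps_eff
length = quarter_wave_stub_mm(5.25e9, eps_eff=2.9)   # a few millimetres
```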

Keywords: band-pass filters, inter-digital filter, microstrip, via-less

Procedia PDF Downloads 131
407 Impact of Soot on NH3-SCR, NH3 Oxidation and NH3 TPD over Cu/SSZ-13 Zeolite

Authors: Lidija Trandafilovic, Kirsten Leistner, Marie Stenfeldt, Louise Olsson

Abstract:

Ammonia selective catalytic reduction (NH3-SCR) is one of the most efficient post-combustion abatement technologies for removing NOx from diesel engines. To remove soot, diesel particulate filters (DPF) are used. Recently, SCR-coated filters have been introduced, which capture soot and are simultaneously active for ammonia SCR. SCR-coated filters offer large advantages, such as decreased volume and better light-off characteristics, since both the SCR function and the filter function are close to the engine. The objective of this work was to examine the effect of soot, produced in an engine bench, on Cu/SSZ-13 catalysts. The impact of soot on Cu/SSZ-13 in standard SCR, NH3 oxidation, NH3 temperature-programmed desorption (TPD), and soot oxidation (with and without water) was examined using flow reactor measurements. In all experiments, prior to soot loading, the fresh activity of Cu/SSZ-13 was recorded while increasing the temperature stepwise from 100°C to 600°C. Thereafter, the sample was loaded with soot and the experiment was repeated in the temperature range from 100°C to 700°C. The amounts of CO and CO2 produced in each experiment are used to calculate the soot oxidised at each steady-state temperature. The soot oxidised while heating to the next temperature step is included in that step, e.g. the CO + CO2 produced while increasing the temperature to 600°C is added to the 600°C step. Two factors appear most important for soot oxidation: ammonia and water. Water shifts the maxima of CO2 and CO production towards lower temperatures; thus, water increases the soot oxidation. When ammonia is added to the system, the soot oxidation is clearly lowered: the integrated COx at 500°C is larger for O2+H2O than for O2+H2O+NH3, while the opposite holds at 600°C, where more soot is oxidised in the O2+H2O+NH3 case. To conclude, the presence of ammonia reduces the soot oxidation, which is in line with the ammonia TPD results, where ammonia storage on the soot was found. Interestingly, under ammonia SCR conditions the soot oxidation activity is regained at 500°C: at this high temperature the SCR zone is very short, so the majority of the catalyst is not exposed to ammonia and the inhibition effect of ammonia is not observed.
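The COx bookkeeping described above can be sketched as a simple aggregation: soot oxidised at each steady-state temperature is the integrated CO + CO2 at that plateau, plus the COx released while ramping up to it. All amounts below are invented:

```python
# mmol CO + CO2 integrated at each steady-state plateau (toy values)
cox_at_plateau = {500: 1.2, 600: 3.5, 700: 2.1}
# mmol CO + CO2 released during the heat-up ramp into each plateau (toy values)
cox_during_ramp = {600: 0.4, 700: 0.6}

# credit the ramp production to the step it heats into
soot_oxidised = {T: cox_at_plateau[T] + cox_during_ramp.get(T, 0.0)
                 for T in cox_at_plateau}
# e.g. the 600 C entry includes the COx produced while heating from 500 to 600 C
```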

Keywords: NH3-SCR, Cu/SSZ-13, soot, zeolite

Procedia PDF Downloads 203
406 Changes in Pain Intensity of Musculoskeletal Disorders in Flight Attendants after Stretching Exercise Program

Authors: Maria Melania Muda, Retno Wibawanti, Retno Asti Werdhani

Abstract:

Background: Flight attendant (FA) is a job that is often exposed to ergonomic stressors; thus, FAs are very susceptible to symptoms of musculoskeletal disorders (MSDs). One way to overcome musculoskeletal complaints is stretching. This study aimed to examine the prevalence of MSDs and the effect of a 2-week stretching exercise program, using the Indonesian Ministry of Health's stretching video, on changes in musculoskeletal pain intensity in FAs on commercial aircraft in Indonesia. Methods: A pre-post study was conducted using the Nordic Musculoskeletal Questionnaire (NMQ) for MSD identification and a visual analog scale (VAS) for pain intensity measurement. Data were collected and analyzed in SPSS using the Wilcoxon test; the change in pain intensity was considered significant if the p-value was less than 0.05. Results: 92% of the FAs (n=75) had MSDs in at least one area of the body in the last 12 months. Thirty-four respondents participated as subjects. Before the intervention, the complaint-level score across 28 body areas had a median of 34 (29-84) and pain intensity a median of 6 (2-9); after the intervention, these decreased to medians of 32 (28-67) and 3 (0-9), respectively (p < 0.001). Conclusion: The stretching exercise program showed significant changes in both the complaint-level scores in 28 body areas and pain intensity before and after the intervention (p < 0.001).
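The paired pre/post comparison uses the Wilcoxon signed-rank test; a from-scratch sketch of its statistic, on invented toy scores rather than the study's measurements:

```python
def wilcoxon_w(pre, post):
    """Wilcoxon signed-rank W: rank nonzero |differences| (average ranks for
    ties), sum ranks of positive and negative differences, return the smaller."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]   # drop zero diffs
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j):                # average rank for tied |d|
            ranks[order[k]] = (i + j + 1) / 2.0
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)              # small W -> significant change

pre  = [6, 7, 5, 8, 6, 9, 4, 7]              # pain intensity before (toy)
post = [3, 4, 5, 5, 2, 6, 4, 3]              # pain intensity after (toy)
W = wilcoxon_w(pre, post)                    # 0.0 here: every change is a decrease
```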

Keywords: flight attendant, MSDs, Nordic Musculoskeletal Questionnaire, stretching exercise program, visual analog scale

Procedia PDF Downloads 54
405 Numerical Investigation into Capture Efficiency of Fibrous Filters

Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard

Abstract:

Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are capture efficiency and pressure drop. Since a higher capture efficiency generally comes with a higher pressure drop, which leads to higher operating costs, a detailed investigation of the separation mechanism is required to optimize the filter design, i.e., to achieve high capture efficiency at a low pressure drop. Therefore, a two-dimensional flow simulation around a single fiber, using Ansys CFX and Matlab, is used to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious-domain approach for the fiber by implementing a momentum loss model. This approach was chosen to avoid creating a new mesh for each fiber size, thereby saving the time and effort of re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data are then extracted, imported into Matlab, and the particle trajectory is calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach with all relevant forces acting on the particle. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the particle-to-fiber diameter ratio D, the fluid Reynolds number Re, the particle Reynolds number Rep, the Stokes number St, the Froude number Fr, and the fluid-to-particle density ratio ρf/ρp. The simulation results were then compared to the single-fiber theory from the literature.
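A minimal sketch of the one-way-coupled Lagrangian tracking idea: a particle released into a uniform stream with Stokes drag as the only force (the Matlab routine above also carries the further forces of the BBO equation). Explicit Euler integration; all values are illustrative:

```python
def track_particle(u_fluid, v0, tau_p, dt, steps):
    """tau_p is the particle relaxation time; returns the velocity history."""
    v, history = v0, [v0]
    for _ in range(steps):
        v += dt * (u_fluid - v) / tau_p     # Stokes drag: dv/dt = (u - v) / tau_p
        history.append(v)
    return history

hist = track_particle(u_fluid=1.0, v0=0.0, tau_p=1e-3, dt=1e-4, steps=100)
# after ~10 relaxation times the particle velocity has matched the fluid
```

The Stokes number St governs this relaxation relative to the flow timescale around the fiber: large-St particles cannot follow the streamlines and are captured by inertial impaction.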

Keywords: BBO-equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory

Procedia PDF Downloads 170
404 Prevalence of Anxiety among End Stage Renal Disease Patients and Its Association with Patient Compliance to Hemodialysis and Physician Instructions

Authors: Mohammed Asiri, Saleh Alsuwayt, Mohammed Bin Mugren, Abdulmalik Almufarrih, Tariq Alotaibi, Saad Almodameg

Abstract:

Background: End-stage renal disease (ESRD) is a major public health concern with high incidence and mortality rates. Most ESRD patients are on hemodialysis, a long-term therapy that disturbs the patient's lifestyle. As a result, patients are susceptible to psychiatric disorders such as anxiety, which may lead to non-compliance with physician instructions and hemodialysis therapy. Although studies have been conducted on psychiatric issues in hemodialysis patients, few have focused on the effect of anxiety disorder on patient compliance. Hence, we are interested in determining the prevalence of anxiety disorder among hemodialysis patients in Saudi Arabia, as well as the correlation between anxiety disorder and compliance with physician instructions and hemodialysis therapy. We hypothesize that our study will show a higher prevalence of anxiety in hemodialysis patients than in the general population, and we expect anxiety to have a negative impact on compliance. Methodology: We used a cross-sectional study design carried out at the dialysis units of four major hospitals in Riyadh, KSA. We interviewed 235 male and female ESRD patients on hemodialysis and divided them into two categories according to their compliance. We used a modified general questionnaire to collect demographic data, a visual analog scale (VAS) to assess compliance with hemodialysis and physician instructions, and the validated Arabic version of the Hospital Anxiety and Depression Scale (HAD scale) for anxiety assessment. Results: The overall response rate was 54%. Respondents included 147 (62.6%) males and 88 (37.4%) females. The prevalence of anxiety among hemodialysis patients was 13.3%. According to the visual analog scale, 189 patients were compliant and 45 non-compliant. On the HAD scale, the mean ± standard deviation of the total score for females was 4.44 ± 4.7, higher than for males at 2.65 ± 3.08 (p = 0.002). The mean HAD score in the non-compliant group was 5.88 ± 4.88, higher than in the compliant group at 2.7 ± 3.32 (p = 0.004). Among the non-compliant group, 33.3% of anxious patients were male and 66.6% female. There was a negative correlation between the HAD anxiety score and the visual analog scale (R = -0.285). Conclusion: We conclude that there is a high prevalence of anxiety among patients with end-stage renal disease, higher in females, and associated with non-compliance with physician instructions and hemodialysis therapy.
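The correlation step behind the reported R = -0.285 can be sketched as a Pearson coefficient between HAD anxiety scores and VAS compliance scores, computed from scratch on invented toy data (not the study's measurements):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance over the product of
    standard deviations (computed with raw sums, no libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

had = [1, 3, 5, 7, 9, 11]     # HAD anxiety scores (toy)
vas = [9, 8, 8, 6, 5, 3]      # VAS compliance scores (toy)
r = pearson_r(had, vas)        # negative: higher anxiety, lower compliance
```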

Keywords: anxiety, end-stage renal disease, renal failure, anxiety disorder

Procedia PDF Downloads 242
403 The Excess Loop Delay Calibration in a Bandpass Continuous-Time Delta Sigma Modulators Based on Q-Enhanced LC Filter

Authors: Sorore Benabid

Abstract:

Q-enhanced LC filters are the most used architecture in bandpass (BP) continuous-time (CT) delta-sigma (ΣΔ) modulators, due to their high-frequency operation, higher linearity than active filters, and the high quality factor obtained by the Q-enhancement technique. This technique uses a negative resistance to compensate the ohmic losses in the on-chip inductor. However, it introduces a zero in the filter transfer function which affects the modulator performance in terms of dynamic range (DR), stability, and in-band noise (signal-to-noise ratio, SNR). In this paper, we study the effect of this zero and demonstrate that a calibration of the excess loop delay (ELD) is required to ensure the best performance of the modulator. System-level simulations are done for a 2nd-order BP CT ΣΔ modulator at a center frequency of 300 MHz. Simulation results indicate that the optimal ELD should be reduced by 13% to achieve the maximum SNR and DR compared to the ideal LC-based ΣΔ modulator.

Keywords: continuous-time bandpass delta-sigma modulators, excess loop delay, on-chip inductor, Q-enhanced LC filter

Procedia PDF Downloads 302
402 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems

Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang

Abstract:

Aiming at the problem that the spectral efficiency of hybrid precoding (HP) is too low in current millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on iteration of the digital encoding matrix. First, the objective function is expanded to obtain the relation equation, and the pseudo-inverse iterative function of the analog encoder is derived using the pseudo-inverse method, which avoids the large increase in computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of the hybrid precoding. Secondly, the analog encoding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, which is subjected to singular value decomposition (SVD) to obtain the digital encoding matrix; the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog encoding matrix. Simulation results show that the proposed algorithm improves the system spectral efficiency by 10-20% compared with other algorithms, and stability is also improved.
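The SVD step described above can be sketched as follows: combine the analog precoder and the channel into an equivalent channel, then take its leading right singular vectors as the digital precoder. Dimensions are toy values, and the i.i.d. Gaussian channel below is a stand-in for a sparse mmWave channel model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rf, n_rx, n_streams = 16, 4, 4, 2
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
# analog precoder: constant-modulus phase shifters (random phases here)
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_tx, n_rf))) / np.sqrt(n_tx)

H_eq = H @ F_rf                                  # equivalent channel
_, _, Vh = np.linalg.svd(H_eq)
F_bb = Vh.conj().T[:, :n_streams]                # digital precoder: top singular vectors
F_bb /= np.linalg.norm(F_rf @ F_bb, 'fro')       # total transmit-power constraint
F = F_rf @ F_bb                                  # overall hybrid precoder
```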

Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel

Procedia PDF Downloads 66
401 A Novel Dual Band-Pass Filter Based on the Coupling of a Composite Right/Left-Handed CPW and CSRRs Using Ferrite Components

Authors: Mohammed Berka, Khaled Merit

Abstract:

Recent work on microwave filters shows that the constituent materials of such filters are very important in their design and realization. Several solutions have been proposed to improve the filtering qualities. In this paper, we propose a new dual band-pass filter based on the coupling of a composite right/left-handed (CRLH) coplanar waveguide with complementary split-ring resonators (CSRRs). The CRLH CPW is composed of two resonators, each with an interdigital capacitor (IDC) and two short-circuited stubs parallel to the top ground plane. On the lower ground plane, we use defected ground structure (DGS) technology to engrave two CSRRs with different shapes and dimensions. Between the top ground plane and the substrate, we place a ferrite layer to control the electromagnetic coupling between the CRLH CPW and the CSRRs. The overall filter, which has coplanar access, exhibits dual band-pass behavior around the magnetic resonances of the CSRRs. Since there is no scientific or experimental result in the literature for this kind of complicated structure, it was necessary to perform simulations using Ansoft HFSS.
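The pass-bands sit near the CSRRs' LC resonances; a first-order estimate uses f0 = 1 / (2π√(LC)). The inductance and capacitance below are placeholders, not values extracted from the actual ring geometry:

```python
import math

def resonant_frequency_ghz(L_nH, C_pF):
    """LC resonance f0 = 1 / (2 pi sqrt(L C)), returned in GHz."""
    L, C = L_nH * 1e-9, C_pF * 1e-12
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C)) / 1e9

f0 = resonant_frequency_ghz(L_nH=2.5, C_pF=0.4)   # a few GHz
```

Tuning the ferrite layer's bias effectively shifts the coupling, and hence where these two resonances (and pass-bands) fall.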

Keywords: complementary split ring resonators, coplanar waveguide, ferrite, filter, stub

Procedia PDF Downloads 377
400 Human Gesture Recognition for Real-Time Control of Humanoid Robot

Authors: S. Aswath, Chinmaya Krishna Tilak, Amal Suresh, Ganesh Udupa

Abstract:

There are many ways to control a humanoid robot, but the use of Electromyogram (EMG) electrodes has its own importance in setting up the control system: an EMG-based control system helps to control robotic devices with greater fidelity and precision. In this paper, the development of an electromyogram-based interface for human gesture recognition for the control of a humanoid robot is presented. To recognize control signs in the gestures, a single-channel EMG sensor is positioned on the muscles of the human body. Instead of using a remote control unit, the humanoid robot is controlled by various gestures performed by the human. The EMG electrodes attached to the muscles generate an analog signal due to the nerve impulses produced by the moving muscles. The analog signals taken from the muscles are supplied to a differential muscle sensor that processes them into a signal suitable for the microcontroller to gain control over the humanoid robot. The signal from the differential muscle sensor is converted to digital form using the ADC of the microcontroller, which outputs its decision to the CM-530 humanoid robot controller through a Zigbee wireless interface. The output decision of the CM-530 processor is sent to a motor driver in order to drive the servo motors in the required direction for human-like actions. This method of gaining control of a humanoid robot could be used to perform actions with greater accuracy and ease. In addition, a study has been conducted to investigate the controllability and ease of use of the interface and the employed gestures.
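As a rough illustration of the signal path described above (analog EMG envelope, ADC conversion, then a control decision), the following toy sketch maps ADC codes to robot commands by thresholding; the thresholds, ADC resolution, and command names are invented for illustration and are not the authors' system:

```python
def adc_sample(voltage, vref=5.0, bits=10):
    """Quantize a 0..vref analog voltage to an unsigned ADC code."""
    clamped = max(0.0, min(voltage, vref))
    return int(clamped / vref * (2 ** bits - 1))

def classify_gesture(code, rest=200, weak=500):
    """Map an ADC code to a command for the humanoid controller
    (illustrative thresholds and command set)."""
    if code < rest:
        return "IDLE"
    elif code < weak:
        return "WALK"
    return "GRASP"

samples = [0.3, 1.8, 4.2]  # muscle-sensor output voltages (volts)
commands = [classify_gesture(adc_sample(v)) for v in samples]
print(commands)  # → ['IDLE', 'WALK', 'GRASP']
```

In the actual system the decision would be sent over Zigbee to the CM-530 controller rather than printed.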

Keywords: electromyogram, gesture, muscle sensor, humanoid robot, microcontroller, Zigbee

Procedia PDF Downloads 381
399 Improving the Frequency Response of a Circular Dual-Mode Resonator with a Reconfigurable Bandwidth

Authors: Muhammad Haitham Albahnassi, Adnan Malki, Shokri Almekdad

Abstract:

In this paper, a method for reconfiguring bandwidth in a circular dual-mode resonator is presented. The method concerns the optimized geometry of a structure that may be used to host the tuning elements, which are typically RF (Radio Frequency) switches. The tuning elements themselves, and their performance during tuning, are not the focus of this paper. The designed resonator is able to reconfigure its fractional bandwidth by adjusting the inter-coupling level between the degenerate modes, while at the same time improving its response by adjusting the external-coupling level and keeping the center frequency fixed. The inter-coupling level has been adjusted by changing the dimensions of the perturbation element, while the external-coupling level has been adjusted by changing one of the feeder dimensions. The design was arrived at via optimization. Simulation and measurement results of the designed and implemented filters agree and show good improvement in return-loss values and in the stability of the center frequency.

Keywords: dual-mode resonators, perturbation theory, reconfigurable filters, software defined radio, cognitive radio

Procedia PDF Downloads 124
398 Constraining the Potential Nickel Laterite Area Using Geographic Information System-Based Multi-Criteria Rating in Surigao Del Sur

Authors: Reiner-Ace P. Mateo, Vince Paolo F. Obille

Abstract:

The traditional method of classifying potential mineral resources requires a significant amount of time and money. This paper presents an alternative way to classify potential mineral resources using GIS, applied to Surigao del Sur. The three analog map inputs integrated into GIS are a geologic map, a topographic map, and a land cover/vegetation map. The indicators used in the classification of potential nickel laterite, integrated from these map inputs, are: a geologic indicator, the presence of ultramafic rock, from the geologic map; a slope indicator and the presence of plateau edges from the topographic map; and areas of forest land, grassland, and shrubland from the land cover/vegetation map. The nickel laterite potential of the area was classified from low to very high. The produced mineral potential classification map of Surigao del Sur shows an estimated 4.63% low, 42.15% medium, 43.34% high, and 9.88% very high nickel laterite potential within its ultramafic terrains. For validation, the produced map was compared with known occurrences of nickel laterite in the area using a nickel mining tenement map together with remote sensing. Three prominent nickel mining companies were delineated in the study area. The generated nickel laterite potential classification map of Surigao del Sur may aid the mining companies currently in the exploration phase in the study area, and the nickel mines operating there can help validate the reliability of the produced map.
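The multi-criteria rating itself can be illustrated with a small weighted-overlay sketch; the indicator rasters, weights, and class breaks below are invented for illustration and are not the values used in the study:

```python
import numpy as np

# Each input "map" is a tiny raster of 0/1 indicator scores (illustrative 3x3 grids)
ultramafic = np.array([[1, 1, 0], [1, 1, 0], [0, 1, 1]])    # geologic indicator
gentle_slope = np.array([[1, 0, 0], [1, 1, 1], [0, 1, 0]])  # topographic indicator
vegetation = np.array([[1, 1, 1], [0, 1, 1], [0, 0, 1]])    # land-cover indicator

# Assumed criterion weights; the weighted sum is the composite rating
weights = {"geology": 0.5, "slope": 0.3, "cover": 0.2}
score = (weights["geology"] * ultramafic
         + weights["slope"] * gentle_slope
         + weights["cover"] * vegetation)

# Bin the composite score into the four classes named in the abstract
classes = np.digitize(score, bins=[0.25, 0.5, 0.75])  # 0=low .. 3=very high
labels = np.array(["low", "medium", "high", "very high"])
print(labels[classes])
```

In a real workflow the rasters would come from digitized geologic, topographic, and land-cover layers, and the weights from expert judgment or pairwise comparison.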

Keywords: mineral potential classification, nickel laterites, GIS, remote sensing, Surigao del Sur

Procedia PDF Downloads 92
397 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively new concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This is clear if one takes into account that a heavy-vehicle fleet may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reports. The present work proposes an adaptable Internet of Things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer with expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification by means of a global navigation satellite system (GNSS) receiver, cellular connectivity by means of a 3G/long-term evolution (LTE) modem, connectivity to on-board diagnostics (OBD), and connectivity to analog and digital sensors by means of a novel expansion board design. Specifically, the latter provides eight analog plus three digital sensor channels, as well as one on-board temperature / relative humidity sensor. The device offers a number of adaptability features based on appropriate zero-ohm resistor placement and appropriate value selection for a limited number of passive components.
For example, although in the standard configuration four analog voltage channels with constant voltage sources for the power supply of the corresponding sensors are available, up to two of these channels can be converted to power the connected sensors by means of corresponding constant current source circuits; moreover, all parameters of the analog sensor power supply and matching circuits are fully configurable, offering the advantage of covering a wide variety of industrial sensors. Note that a key feature of the proposed sensor node, ensuring the reliable operation of the connected sensors, is the appropriate supply of external power to the connected sensors and their proper matching to the IoT sensor node. In standard mode, the IoT sensor node communicates with the data center through 3G/LTE, transmitting all digital/digitized sensor data, the IoT device identity, and the position. Moreover, the proposed IoT sensor node offers WiFi connectivity to mobile devices (smartphones, tablets) equipped with an appropriate application for the manual registration of vehicle- and driver-specific information, and these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware, programmed in a high-level language (Python) on top of a modern operating system (Linux). Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T1EDK-01359, IntelligentLogger).
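One transmission frame of the kind described (device identity, position, and digitized sensor data sent to the data center) might be assembled as in the following sketch; the field names and JSON encoding are illustrative assumptions, not the project's actual protocol:

```python
import json
import time

def build_telemetry(device_id, position, analog, digital, env):
    """Assemble one transmission frame: device identity, GNSS position,
    and all digitized sensor readings (field names are illustrative)."""
    return json.dumps({
        "id": device_id,
        "ts": int(time.time()),
        "pos": {"lat": position[0], "lon": position[1]},
        "analog": analog,      # eight analog channels
        "digital": digital,    # three digital channels
        "env": env,            # on-board temperature / relative humidity
    })

frame = build_telemetry("node-01", (37.98, 23.73),
                        analog=[0.0] * 8, digital=[0, 1, 0],
                        env={"temp_c": 24.5, "rh": 41.0})
print(len(json.loads(frame)["analog"]))  # → 8
```

In the described node this frame would be handed to the 3G/LTE modem rather than printed.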

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

Procedia PDF Downloads 121
396 Modelling of Portable CO2 (Carbon Dioxide) and CO (Carbon Monoxide) Filters in Motor Vehicle Exhaust with Chitosan Absorbent

Authors: Yuandanis Wahyu Salam, Irfi Panrepi, Nuraeni

Abstract:

The increase of greenhouse gases, notably CO2 (carbon dioxide), in the atmosphere raises the earth's average surface temperature. One of the largest contributors to greenhouse gases is motor vehicles: the smoke emitted by a motor vehicle's exhaust contains gases such as CO2 (carbon dioxide) and CO (carbon monoxide). Chemically, chitosan is a cellulose-like plant fiber that has the ability to bind like an absorbent foam. Chitosan is a natural antacid (it absorbs toxins): when spread over the surface of water, it is able to absorb fats, oils, heavy metals, and other toxic substances. Given chitosan's ability to absorb various toxic substances, it is expected that chitosan can also filter gas emissions from motor vehicles. This study designs a carbon dioxide filter for the exhaust of motor vehicles using chitosan as the absorbent. It aims to filter the exhaust gases so that CO2 and CO are reduced before being emitted. This research combines a literature study with experimental construction of the device. Data were collected through documentary study of books, magazines, and theses, through internet searches, and from other relevant references. This study will produce a filter whose main function is to reduce the CO2 and CO emissions generated by a vehicle's exhaust and which can be used as a portable device.

Keywords: filter, carbon, carbondioxide, exhaust, chitosan

Procedia PDF Downloads 322
395 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely recast in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally must be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of convolution kernels are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of only one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computation is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments with a quantitative comparison between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
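The core idea (fixed random kernels of varying sizes, with only a scalar weight per filter left unknown) can be sketched in a forward pass as follows; the kernel sizes, image size, and the use of each kernel's standard deviation as the initial scalar weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_filter_bank(sizes):
    """Fixed random kernels of varying sizes; only the scalar weight per
    filter would be trainable (initialized here to the kernel's standard
    deviation, mirroring the association described in the abstract)."""
    bank = []
    for k in sizes:
        kernel = rng.standard_normal((k, k))
        bank.append((kernel, kernel.std()))
    return bank

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = rng.standard_normal((16, 16))
responses = [w * conv2d_valid(image, k) for k, w in random_filter_bank([3, 5, 7])]
print([r.shape for r in responses])  # → [(14, 14), (12, 12), (10, 10)]
```

Because the kernels are fixed, back-propagation only has to update one scalar per filter, which is why the cost of training does not grow with filter size.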

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 258
394 Strategy of Inventory Analysis with Economic Order Quantity and Quick Response: Case on Filter Inventory for Heavy Equipment in Indonesia

Authors: Lim Sanny, Felix Christian

Abstract:

The use of heavy equipment in Indonesia is continually increasing, and cost reduction in the procurement of spare parts is the aim of the company. The spare parts in this research are focused on one kind: filters. As an early step, the priority filters are chosen for further study using ABC analysis. To forecast future demand for the filters, this research uses the QM for Windows software. The best inventory control method for each kind of filter is found by comparing the total costs of the Economic Order Quantity and Quick Response inventory methods. For the three kinds of filters, which are Cartridge, Engine oil – pn : 600-211-123; Element, Transmission – pn : 424-16-11140; and Element, Hydraulic – pn : 07063-01054, the best forecasting method is linear regression. The best inventory control method for Cartridge, Engine oil – pn : 600-211-123 and Element, Transmission – pn : 424-16-11140 is Quick Response inventory, while the best method for Element, Hydraulic – pn : 07063-01054 is Economic Order Quantity.
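The EOQ side of the comparison rests on the standard formula Q* = sqrt(2DS/H); a minimal sketch, with demand, ordering cost, and holding cost figures invented for illustration (not the study's data), is:

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Economic Order Quantity: Q* = sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

def eoq_total_cost(annual_demand, order_cost, holding_cost, q):
    """Annual ordering cost plus annual holding cost at order quantity q."""
    return annual_demand / q * order_cost + q / 2 * holding_cost

# Illustrative figures: D = 1200 filters/yr, S = $50/order, H = $6/unit/yr
q_star = eoq(1200, 50, 6)
print(round(q_star, 1), round(eoq_total_cost(1200, 50, 6, q_star), 2))  # → 141.4 848.53
```

At the optimum the annual ordering and holding costs are equal, which is a quick sanity check on any EOQ calculation; the study then compares this total cost against the Quick Response policy's cost per filter type.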

Keywords: strategy, inventory, ABC analysis, forecasting, economic order quantity, quick response inventory

Procedia PDF Downloads 339
393 Efficacy of Biofeedback-Assisted Pelvic Floor Muscle Training on Postoperative Stress Urinary Incontinence

Authors: Asmaa M. El-Bandrawy, Afaf M. Botla, Ghada E. El-Refaye, Hassan O. Ghareeb

Abstract:

Background: Urinary incontinence is a common problem among adults. Its incidence increases with age, and it is more frequent in women. Pelvic floor muscle training (PFMT) is the first-line therapy in the treatment of pelvic floor dysfunction (PFD), either alone or combined with biofeedback-assisted PFMT. Aim of the work: The purpose of this study is to evaluate the efficacy of biofeedback-assisted PFMT in postoperative stress urinary incontinence. Settings and Design: A single-blind controlled trial design was used. Methods and Material: This study was carried out on 30 volunteer patients diagnosed with a severe degree of stress urinary incontinence who were admitted for surgical treatment. They were divided randomly into two equal groups: group (A) consisted of 15 patients who were treated with post-operative biofeedback-assisted PFMT and a home exercise program; group (B) consisted of 15 patients who were treated with a home exercise program only. All patients in both groups (A) and (B) were assessed before and after the treatment program by measuring intra-vaginal pressure in addition to the visual analog scale. Results: At the end of the treatment program, there was a highly statistically significant difference between group (A) and group (B) in intra-vaginal pressure and the visual analog scale, favoring group (A). Conclusion: Biofeedback-assisted PFMT is an effective method for the symptomatic relief of post-operative female stress urinary incontinence.

Keywords: stress urinary incontinence, pelvic floor muscles, pelvic floor exercises, biofeedback

Procedia PDF Downloads 279
392 Active Filtration of Phosphorus in Ca-Rich Hydrated Oil Shale Ash Filters: The Effect of Organic Loading and Form of Precipitated Phosphatic Material

Authors: Päärn Paiste, Margit Kõiv, Riho Mõtlep, Kalle Kirsimäe

Abstract:

For small-scale wastewater management, treatment wetlands (TWs) can be used as a low-cost alternative to conventional treatment facilities. However, the P removal capacity of TW systems is usually problematic. P removal in TWs depends mainly on the physico-chemical and hydrological properties of the filter material. The highest P removal efficiency has been shown through Ca-phosphate precipitation (i.e., active filtration) in Ca-rich alkaline filter materials, e.g., industrial by-products such as hydrated oil shale ash (HOSA) and metallurgical slags. In this contribution, we report preliminary results of a full-scale TW system using HOSA material for P removal from municipal wastewater at the Nõo site, Estonia. The main goals of this ongoing project are to evaluate: a) the long-term P removal efficiency of HOSA with real wastewater; b) the effect of a high organic loading rate; c) the effects of variable P-loading on the P removal mechanism (adsorption/direct precipitation); and d) the form and composition of the phosphate precipitates. An onsite full-scale experiment with two concurrent filter systems for the treatment of municipal wastewater was established in September 2013. The system's pretreatment steps include a septic tank (2 m2) and vertical down-flow LECA filters (3 m2 each), followed by horizontal subsurface HOSA filters (effective volume 8 m3 each). The overall organic and hydraulic loading rates of both systems are the same; however, the first system is operated under a stable hydraulic loading regime and the second under a variable loading regime that imitates the wastewater production of an average household. Piezometers for water and perforated sample containers for filter material were incorporated inside the filter beds to allow continuous in-situ monitoring. During the 18 months of operation, the median removal efficiency (inflow to outflow) of both systems was over 99% for TP, 93% for COD, and 57% for TN.
However, we observed significant differences among samples collected at different points inside the filter systems. In both systems, we observed the development of preferential flow paths and zones with high and low loadings. The filters show the formation and gradual advance of a “dead” zone along the flow path (a zone of saturated filter material characterized by ineffective removal rates), which develops more rapidly in the system working under the variable loading regime. The formation of the “dead” zone is accompanied by the growth of organic substances on the filter material particles, which evidently inhibit P removal. Phase analysis of the used filter materials by the X-ray diffraction method reveals the formation of minor amounts of amorphous Ca-phosphate precipitates. This finding is supported by ATR-FTIR and SEM-EDS measurements, which also reveal Ca-phosphate and authigenic carbonate precipitation. Our first experimental results demonstrate that organic pollution and the loading regime significantly affect the performance of hydrated ash filters. The material analyses also show that P is incorporated into a carbonate-substituted hydroxyapatite phase.

Keywords: active filtration, apatite, hydrated oil shale ash, organic pollution, phosphorus

Procedia PDF Downloads 249
391 Acoustic Echo Cancellation Using Different Adaptive Algorithms

Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil

Abstract:

An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous application areas in science as well as in industry, including adaptive noise cancellation and echo cancellation. Acoustic echo is a common occurrence in today's telecommunication systems; the signal interference it causes is distracting to both users and reduces the quality of the communication. In this paper, we review different adaptive filtering techniques for reducing this unwanted echo, examining the behavior of the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Variable Step-Size Least Mean Square (VSLMS), Variable Step-Size Normalized Least Mean Square (VSNLMS), New Varying Step Size LMS (NVSSLMS), and Recursive Least Square (RLS) algorithms, with the aim of increasing communication quality.
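A minimal sketch of one of the reviewed algorithms, NLMS, applied to a synthetic echo path follows; the echo path, filter length, and step size are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def nlms_echo_cancel(far_end, mic, taps=32, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt an FIR filter so its output tracks the echo in
    the microphone signal; the residual e[n] is the echo-cancelled output."""
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps + 1:n + 1][::-1]  # most recent far-end samples
        y = w @ x                              # estimated echo
        e = mic[n] - y                         # residual after cancellation
        w += mu * e * x / (x @ x + eps)        # normalized step update
        out[n] = e
    return out

# Synthetic setup: the "echo" is the far-end signal through an unknown FIR path
far_end = rng.standard_normal(4000)
echo_path = np.array([0.6, 0.3, 0.1])
mic = np.convolve(far_end, echo_path)[:4000]   # echo only, no near-end speech

residual = nlms_echo_cancel(far_end, mic)
print(np.mean(residual[-500:] ** 2) < 1e-4)    # echo largely cancelled → True
```

The normalization by the input energy (x @ x) is what distinguishes NLMS from plain LMS and makes the step size robust to input level; the variable step-size variants reviewed in the paper adapt mu itself over time.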

Keywords: adaptive acoustic, echo cancellation, LMS algorithm, adaptive filter, normalized least mean square (NLMS), variable step-size least mean square (VSLMS)

Procedia PDF Downloads 51
390 High-Efficiency Comparator for Low-Power Application

Authors: M. Yousefi, N. Nasirzadeh

Abstract:

In this paper, a dynamic comparator structure employing two methods for power-consumption reduction, with applications in low-power high-speed analog-to-digital converters, is presented. The proposed comparator achieves low consumption thanks to these power-reduction methods and also offers offset adjustment. The comparator consumes 14.3 μW at 100 MHz, which is equal to 11.8 fJ. It has been designed and simulated in 180 nm CMOS, and its layout occupies 210 μm2.

Keywords: efficiency, comparator, low power

Procedia PDF Downloads 325
389 Blood Volume Pulse Extraction for Non-Contact Photoplethysmography Measurement from Facial Images

Authors: Ki Moo Lim, Iman R. Tayibnapis

Abstract:

According to WHO estimates, 38 out of 56 million (68%) global deaths in 2012 were due to noncommunicable diseases (NCDs). One way to avert NCDs is early detection of disease. To that end, we developed the 'U-Healthcare Mirror', which is able to measure vital signs such as heart rate (HR) and respiration rate without any physical contact or conscious effort by the user. To measure HR in the mirror, we utilized a digital camera. The camera records red, green, and blue (RGB) discoloration from the user's facial image sequences. We extracted the blood volume pulse (BVP) from the RGB discoloration, because the discoloration of the facial skin follows the BVP. We used blind source separation (BSS) to extract the BVP from the RGB discoloration, and adaptive filters to remove noise; both the BSS and the adaptive filters were implemented using the singular value decomposition (SVD) method. HR was estimated from the obtained BVP. We conducted HR measurement experiments using our method and a previous method based on independent component analysis (ICA), and compared both against HR measurements from a commercial oximeter. The experiments were conducted at various distances between 30 and 110 cm and light intensities between 5 and 2000 lux, with 7 measurements for each condition. The estimated HR showed a mean error of 2.25 bpm and a Pearson correlation coefficient of 0.73, an improvement in accuracy over the previous work. The optimal distance between the mirror and the user for HR measurement was 50 cm at medium light intensity, around 550 lux.
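The SVD-as-BSS step described above can be sketched on synthetic RGB traces; the frame rate, signal model, and mixing amplitudes below are invented for illustration and are not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 30.0                          # camera frame rate (assumed), Hz
t = np.arange(0, 20, 1 / fs)       # 20 s of frames
true_hr = 72 / 60                  # a 72 bpm pulse, in Hz

# Synthetic mean R, G, B traces: a shared pulse mixed with noise, standing in
# for the facial-skin discoloration recorded by the camera
pulse = np.sin(2 * np.pi * true_hr * t)
rgb = np.stack([a * pulse + 0.3 * rng.standard_normal(t.size)
                for a in (0.4, 1.0, 0.6)])

# SVD as a simple blind-source-separation step: the dominant right singular
# vector captures the component shared across the three channels
rgb_centered = rgb - rgb.mean(axis=1, keepdims=True)
_, _, Vt = np.linalg.svd(rgb_centered, full_matrices=False)
bvp = Vt[0]

# Heart rate = strongest spectral peak of the recovered BVP
spectrum = np.abs(np.fft.rfft(bvp))
freqs = np.fft.rfftfreq(bvp.size, 1 / fs)
hr_bpm = 60 * freqs[np.argmax(spectrum)]
print(int(round(hr_bpm)))  # → 72
```

A real pipeline would add band-pass filtering to the physiological HR range and the adaptive noise-removal stage the abstract mentions before the spectral peak search.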

Keywords: blood volume pulse, heart rate, photoplethysmography, independent component analysis

Procedia PDF Downloads 310
388 Combined Treatment of Estrogen-Receptor Positive Breast Microtumors with 4-Hydroxytamoxifen and Novel Non-Steroidal Diethyl Stilbestrol-Like Analog Produces Enhanced Preclinical Treatment Response and Decreased Drug Resistance

Authors: Sarah Crawford, Gerry Lesley

Abstract:

This research is a pre-clinical assessment of the anti-cancer effects of novel non-steroidal diethylstilbestrol-like estrogen analogs in estrogen-receptor-positive/progesterone-receptor-positive human breast cancer microtumors of the MCF-7 cell line. A tamoxifen analog formulation (Tam A1) was used as a single agent or in combination with therapeutic concentrations of 4-hydroxytamoxifen, currently used as a long-term treatment for the prevention of breast cancer recurrence in women with estrogen-receptor-positive/progesterone-receptor-positive malignancies. At concentrations ranging from 30-50 microM, Tam A1 induced microtumor disaggregation and cell death, with incremental cytotoxic effects correlating with increasing concentrations of Tam A1. Live tumor microscopy showed that microtumors displayed diffuse borders and that substrate-attached cells were rounded up and poorly adherent. A complete cytotoxic effect was observed using 40-50 microM Tam A1, with time-course kinetics similar to 4-hydroxytamoxifen. Combined treatment with Tam A1 (30-50 microM) and 4-hydroxytamoxifen (10-15 microM) induced a highly cytotoxic, synergistic response that was more rapid and complete than 4-hydroxytamoxifen as a single-agent therapeutic: microtumors completely dispersed or formed necrotic foci. Tumor regrowth at 6 weeks post-treatment with single-agent 4-hydroxytamoxifen, single-agent Tam A1, or the combined treatment was assessed for the development of drug resistance.
Breast cancer cells treated with both 4-hydroxytamoxifen and Tam A1 displayed significantly lower levels of post-treatment regrowth, indicative of decreased drug resistance, than observed for either single-treatment modality. The preclinical data suggest that combined treatment involving tamoxifen analogs may be a novel clinical approach for long-term maintenance therapy in patients with estrogen-receptor-positive/progesterone-receptor-positive breast cancer receiving hormonal therapy to prevent disease recurrence. Detailed data on time-course, IC50, and tumor regrowth assays post-treatment, as well as a proposed mechanism of action to account for the observed synergistic drug effects, will be presented.

Keywords: 4-hydroxytamoxifen, tamoxifen analog, drug-resistance, microtumors

Procedia PDF Downloads 35