Search results for: optimum signal approximation
3182 Monitoring of Spectrum Usage and Signal Identification Using Cognitive Radio
Authors: O. S. Omorogiuwa, E. J. Omozusi
Abstract:
The monitoring of spectrum usage and signal identification using cognitive radio is done to identify frequencies that are vacant for reuse. It has been established that 'internet of things' devices use free secondary frequencies and therefore face interference from other users, even while some primary frequencies remain unutilised. The design was carried out by analysing a specific frequency spectrum, checking whether all the frequency stations in the range 87.5-108 MHz are presently in use in Benin City, Edo State, Nigeria. The results show that, using Software Defined Radio/Simulink, we were able to identify vacant frequencies in the range under consideration. We were also able to use an energy detection threshold to reuse this vacant spectrum: when the cognitive radio displays a zero output (decision H0), the channel is unoccupied. Hence, the analysis was able to find the spectrum holes and identify how they can be reused.
Keywords: spectrum, interference, telecommunication, cognitive radio, frequency
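The H0/H1 decision described in this abstract can be sketched as a simple energy detector. This is an illustrative reconstruction, not the authors' Simulink implementation; the threshold and test signals are hypothetical:

```python
import numpy as np

def energy_detect(samples, threshold):
    # Average signal energy over the observation window
    energy = np.mean(np.abs(samples) ** 2)
    # Decision H0 (channel vacant, reusable) when energy is below the
    # threshold, H1 (channel occupied) otherwise
    return "H0" if energy < threshold else "H1"

rng = np.random.default_rng(0)
noise_only = 0.1 * rng.standard_normal(1024)  # vacant channel: noise floor only
occupied = np.sin(2 * np.pi * 0.05 * np.arange(1024)) \
           + 0.1 * rng.standard_normal(1024)  # an FM carrier plus noise

print(energy_detect(noise_only, threshold=0.1))  # H0 -> spectrum hole
print(energy_detect(occupied, threshold=0.1))    # H1 -> occupied
```

In practice the threshold would be set from the measured noise floor of the receiver rather than fixed by hand.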
Procedia PDF Downloads 224
3181 Effect of Compaction Energy on the Compaction of Soils with Low Water Content in the Semi-arid Region of Chlef
Authors: Obeida Aiche, Mohamed Khiatine, Medjnoun Amal, Ramdane Bahar
Abstract:
Soil compaction is one of the most challenging tasks in the construction of road embankments, railway platforms, and earth dams. Stability and durability are mainly related to the nature of the materials used and the type of soil in place. However, nature does not always offer the engineer materials with the right water content, especially in arid and semi-arid regions, where reaching the optimum Proctor water content requires the addition of considerable quantities of water. The current environmental context does not allow for such use of water in these regions, where it is preferable to preserve water resources for the benefit of the local population. Compaction at low water content can be an interesting approach, as it promotes the reuse of earthwork materials in their dry or very dry state. Thanks to advances in compaction equipment, such as vibratory compactors, which have considerably increased the available compaction energy, it is possible for some materials to obtain satisfactory quality by compacting at water contents lower than the optimum determined by the Proctor test. This communication deals with the compaction at low water content of soils from the semi-arid zone of the Chlef region in Algeria by increasing the compaction energy.
Keywords: compaction, soil, low water content, compaction energy
Procedia PDF Downloads 110
3180 Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement
Authors: M. Z. Kurian, M. V. Chidananda Murthy, H. S. Guruprasad
Abstract:
An approach to super-resolve a low-resolution (LR) image is presented in this paper; it is useful in multimedia communication, medical image enhancement, and satellite image enhancement, where a clear view of the information in the image is needed. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. The method combines the B-Spline technique with crispening. The work is evaluated qualitatively and quantitatively using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR), and the method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms are tested in the presence of different noise types and noise factors.
Keywords: advanced B-spline, image super-resolution, mean square error (MSE), peak signal to noise ratio (PSNR), resolution down converter
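The MSE and PSNR metrics used for the quantitative evaluation above are standard and can be computed as follows (the two 2x2 images are toy data, not from the paper):

```python
import numpy as np

def mse(reference, reconstructed):
    # Mean Square Error between a reference image and a reconstruction
    return np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)

def psnr(reference, reconstructed, peak=255.0):
    # Peak Signal to Noise Ratio in dB for images with a given peak value
    m = mse(reference, reconstructed)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

ref = np.array([[50, 60], [70, 80]], dtype=np.uint8)
rec = np.array([[52, 58], [70, 81]], dtype=np.uint8)
print(mse(ref, rec))   # (4 + 4 + 0 + 1) / 4 = 2.25
print(psnr(ref, rec))  # 10*log10(255^2 / 2.25) ~ 44.61 dB
```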
Procedia PDF Downloads 399
3179 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with Elliptical Hole
Authors: Basavaraj R. Endigeri, S. G. Sarganachari
Abstract:
Steel is widely used in machine parts, structural equipment, and many other applications. In many steel structural elements, holes of different shapes and orientations are made to satisfy design requirements. The presence of holes in steel elements creates stress concentration, which eventually reduces the mechanical strength of the structure. It is therefore of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. From a literature survey, it is known that to date there is no analytical solution for reducing the stress concentration by providing auxiliary holes at a definite location and radii in a steel plate; numerical methods can be used to determine the optimum location and radii of the auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analysed, and the effect of stress concentration is represented graphically. The effect on stress concentration of introducing auxiliary holes at an optimum location and radii is also represented graphically. The finite element analysis package ANSYS 11.0 is used to analyse the steel plate, with the analysis carried out using the PLANE42 element. The ANSYS optimization module is then used to determine the location and radii of the auxiliary holes that minimise stress concentration. Results for different hole-diameter to plate-width ratios are presented graphically, in the form of graphs for determining the locations and diameters of optimal auxiliary holes, including the graph of stress concentration versus the ratio of central hole diameter to plate width.
The finite element results of the study indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of the central hole.
Keywords: finite element method, optimization, stress concentration factor, auxiliary holes
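As textbook background for the quantity being reduced here: the classical Inglis closed-form stress concentration factor for an elliptical hole in an infinite plate under uniaxial tension (the paper's FEM handles the finite plate and the auxiliary holes, which have no such closed form):

```python
def kt_elliptical_hole(a, b):
    # Inglis solution for an infinite plate under remote uniaxial tension:
    # Kt = 1 + 2a/b, with semi-axis a perpendicular to the load direction
    # and semi-axis b parallel to it
    return 1.0 + 2.0 * a / b

print(kt_elliptical_hole(1.0, 1.0))  # circular hole: Kt = 3
print(kt_elliptical_hole(2.0, 1.0))  # flatter ellipse across the load: Kt = 5
```

The sharper the ellipse is transverse to the load, the larger Kt, which is why reducing the peak stress with auxiliary holes is worthwhile.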
Procedia PDF Downloads 453
3178 Determination of the Axial-Vector from an Extended Linear Sigma Model
Authors: Tarek Sayed Taha Ali
Abstract:
The dependence of the axial-vector coupling constant gA on the quark masses has been investigated in the framework of the extended linear sigma model. The field equations have been solved in the mean-field approximation. Our study shows a better fit to the experimental data compared with the existing models.
Keywords: extended linear sigma model, nucleon properties, axial coupling constant, physics
Procedia PDF Downloads 445
3177 Evaluation of Labelling Conditions, Quality Control, and Biodistribution Study of 99mTc- D-Aminolevulinic Acid (5-ALA)
Authors: Kalimullah Khan, Samina Roohi, Mohammad Rafi, Rizwana Zahoor
Abstract:
Labeling of 5-aminolevulinic acid (5-ALA) with 99mTc was achieved by using tin chloride dihydrate (SnCl2.2H2O) as the reducing agent. Radiochemical purity and labeling efficiency were determined using Whatman No. 3 paper and instant thin-layer chromatographic strips impregnated with silica gel (ITLC/SG). Labeling efficiency depended on several parameters, such as the amount of ligand, reducing agent, pH, and incubation time; optimum conditions for maximum labeling were therefore selected. The stability of 99mTc-5-ALA was also checked in fresh human serum, and the tissue biodistribution of 99mTc-5-ALA was evaluated in Sprague Dawley rats. 5-ALA was 98% labeled with 99mTc under optimum conditions, i.e., 100 µg of 5-ALA, pH 4, 10 µg of SnCl2.2H2O, and 30 minutes of incubation at room temperature. 99mTc-labeled 5-ALA remained stable for 24 hours in human serum. The biodistribution study (%ID/g) in rats revealed that the maximum accumulation of 99mTc-5-ALA was in the liver, spleen, stomach, and intestine after half an hour, 4 hours, and 24 hours. Significant activity in the bladder and urine indicated a urinary mode of excretion.
Keywords: 99mTc-ALA, aminolevulinic acid, quality control, radiopharmaceuticals
Procedia PDF Downloads 384
3176 Mathematical Modeling Pressure Losses of Trapezoidal Labyrinth Channel and Bi-Objective Optimization of the Design Parameters
Authors: Nina Philipova
Abstract:
The influence of the geometric parameters of a trapezoidal labyrinth channel on the pressure losses along the labyrinth length is investigated in this work. The impact of the dentate height is studied at fixed values of the dentate angle and the dentate spacing. The objective of the work presented in this paper is to derive a mathematical model of the pressure losses along the labyrinth length as a function of the dentate height. The numerical simulations of the water flow are performed using the commercial codes ANSYS GAMBIT and FLUENT, with the dripper inlet pressure set to 1 bar. As a result, the mathematical model of the pressure losses is determined as a second-order polynomial by means of the commercial code STATISTICA. Bi-objective optimization is performed using the mean algebraic utility function, and the optimum value of the dentate height is defined at fixed values of the dentate angle and the dentate spacing. The derived model of the pressure losses and the optimum value of the dentate height are used as a basis for a more successful emitter design.
Keywords: drip irrigation, labyrinth channel hydrodynamics, numerical simulations, Reynolds stress model
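Fitting a second-order polynomial to simulated pressure losses and reading off its stationary point can be sketched as below. The (height, pressure-loss) pairs are hypothetical placeholders for the CFD results, not the paper's data:

```python
import numpy as np

# Hypothetical dentate heights (mm) and simulated pressure losses (bar),
# standing in for the GAMBIT/FLUENT results
h = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
dp = np.array([0.325, 0.309, 0.301, 0.301, 0.309])

# Second-order polynomial model dp(h) = c2*h^2 + c1*h + c0
c2, c1, c0 = np.polyfit(h, dp, 2)

# Vertex of the parabola: candidate extremal dentate height
h_opt = -c1 / (2 * c2)
print(round(h_opt, 3))
```

The bi-objective step in the paper then trades this pressure-loss model against a second criterion via a utility function; the fit alone only locates the extremum of one objective.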
Procedia PDF Downloads 154
3175 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity can be such a scheme, in that it improves the network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest, with the neighboring satellites modeled as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, so the overall performance of the GNSS network can degrade severely. To tackle this problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to move the operations of the relay nodes to the source node, which has more resources. In this paper, we therefore propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes.
The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of the Defense Acquisition Program Administration and the Agency for Defense Development.
Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
Procedia PDF Downloads 280
3174 A Study to Identify Resistant Hypertension and Role of Spironolactone in its Management
Authors: A. Kumar, D. Himanshu, A. K. Vaish, K. Usman, A. Singh, R. Misra, V. Atam, S. P. Verma, S. Singhal
Abstract:
Introduction: Resistant and uncontrolled hypertension pose a great challenge, in terms of a higher risk of morbidity and mortality and, not least, difficulty in diagnosis and management. Our study examines the importance of two crucial aspects of hypertension management, i.e., drug compliance and optimum dosing, as well as the effect of spironolactone on blood pressure in cases of resistant hypertension. Methodology: A prospective study was carried out among patients referred as cases of resistant hypertension to the Hypertension Clinic at Gandhi Memorial and Associated Hospital, Lucknow, India, from August 2013 to July 2014. A total of 122 subjects with uncontrolled BP on ≥3 antihypertensives were selected. After ruling out secondary resistance and applying appropriate lifestyle modifications, the effect of adherence and optimum dosing was assessed with monitoring of BP. Only those whose blood pressure remained uncontrolled were considered truly resistant; these patients were given spironolactone to assess its effect on BP over the next 12 weeks. Results: The mean baseline BP of all 122 patients was 150.4±7.2 mmHg systolic and 92.1±5.7 mmHg diastolic. After promoting adherence to the regimen, there was a reduction of 4.20±3.65 mmHg in systolic and 2.08±4.74 mmHg in diastolic blood pressure, with 26 patients achieving the target blood pressure goal. A further reduction of 6.66±5.99 mmHg in systolic and 2.59±3.67 mmHg in diastolic BP was observed after optimizing the drug doses, with another 66 patients achieving the target goal. Only 30 patients were truly resistant hypertensives and were prescribed spironolactone. Over 12 weeks, a mean reduction of 20.62±3.65 mmHg in systolic and 10.08±6.46 mmHg in diastolic BP was observed; BP was controlled in 24 of these 30 patients. The side effects observed were hyperkalemia in 2 patients and breast tenderness in 2 patients. Conclusion: Improper adherence and suboptimal regimens appear to be important reasons for uncontrolled hypertension.
By maintaining proper adherence to an optimum regimen, the target BP goal can be reached in many patients without adding much to the regimen. Spironolactone is effective in patients with resistant hypertension, in terms of blood pressure reduction with minimal side effects.
Keywords: resistant, hypertension, spironolactone, blood pressure
Procedia PDF Downloads 278
3173 A Comprehensive Analysis of the Phylogenetic Signal in Ramp Sequences in 211 Vertebrates
Authors: Lauren M. McKinnon, Justin B. Miller, Michael F. Whiting, John S. K. Kauwe, Perry G. Ridge
Abstract:
Background: Ramp sequences increase translational speed and accuracy when rare, slowly-translated codons are found at the beginnings of genes. Here, the results of the first analysis of ramp sequences in a phylogenetic construct are presented. Methods: Ramp sequences were compared from 211 vertebrates (110 mammalian and 101 non-mammalian). The presence and absence of ramp sequences were analyzed as a binary character in a parsimony and maximum likelihood framework. Additionally, ramp sequences were mapped to the Open Tree of Life taxonomy to determine the number of parallelisms and reversals that occurred, and these results were compared to what would be expected due to random chance. Lastly, aligned nucleotides in ramp sequences were compared to the rest of the sequence in order to examine possible differences in phylogenetic signal between these regions of the gene. Results: Parsimony and maximum likelihood analyses of the presence/absence of ramp sequences recovered phylogenies that are highly congruent with established phylogenies. Additionally, the retention index of ramp sequences is significantly higher than would be expected due to random chance (p-value = 0). A chi-square analysis of completely orthologous ramp sequences resulted in a p-value of approximately zero as compared to random chance. Discussion: Ramp sequences recover comparable phylogenies as other phylogenomic methods. Although not all ramp sequences appear to have a phylogenetic signal, more ramp sequences track speciation than expected by random chance. Therefore, ramp sequences may be used in conjunction with other phylogenomic approaches.
Keywords: codon usage bias, phylogenetics, phylogenomics, ramp sequence
Procedia PDF Downloads 162
3172 Poincare Plot for Heart Rate Variability
Authors: Mazhar B. Tayel, Eslam I. AlSaba
Abstract:
The heart is one of the most important organs in any organism; it affects, and is affected by, every factor in the body, and it is therefore a good indicator of the body's condition. Since the heart signal is non-stationary, its variability should be studied. Heart rate variability (HRV) has thus attracted considerable attention in psychology and medicine and has become an important dependent measure in psychophysiology and behavioral medicine. The quantification and interpretation of heart rate variability, however, remain complex issues fraught with pitfalls. This paper presents one of the non-linear techniques for analyzing HRV. It discusses what the Poincare plot is, how it works, the benefits of its use, especially for HRV, the limitation of the Poincare plot caused by the standard deviation descriptors SD1 and SD2, and how to overcome this limitation by using the complex correlation measure (CCM). The CCM is most sensitive to changes in the temporal structure of the Poincaré plot, as compared to SD1 and SD2.
Keywords: heart rate variability, chaotic system, Poincare, variance, standard deviation, complex correlation measure
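The SD1/SD2 descriptors criticized in this abstract have standard closed forms in terms of the RR-interval series and its successive differences. A minimal sketch (the RR values are toy data, and CCM itself is not reproduced here):

```python
import numpy as np

def poincare_sd(rr):
    # SD1/SD2 of the Poincare plot of successive RR intervals:
    # SD1 (plot width) reflects short-term variability,
    # SD2 (plot length) reflects long-term variability.
    rr = np.asarray(rr, dtype=float)
    d = np.diff(rr)
    sd1 = np.sqrt(0.5 * np.var(d))
    sd2 = np.sqrt(2.0 * np.var(rr) - 0.5 * np.var(d))
    return sd1, sd2

rr = [800, 810, 790, 815, 805, 795, 820, 800]  # RR intervals in ms (toy data)
sd1, sd2 = poincare_sd(rr)
print(round(sd1, 2), round(sd2, 2))
```

Because SD1 and SD2 summarize only the dispersion of the cloud, two series with different temporal orderings can share the same SD1/SD2, which is exactly the limitation the CCM is meant to address.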
Procedia PDF Downloads 400
3171 Signal Transduction in a Myenteric Ganglion
Authors: I. M. Salama, R. N. Miftahof
Abstract:
A functional element of the myenteric nervous plexus is the morphologically distinct ganglion. Composed of sensory, inter-, and motor neurons arranged via synapses into neuronal circuits, ganglia decipher and integrate spike-coded information within the plexus into regulatory output signals. The stability of signal processing in response to a wide range of internal and external perturbations depends on the plasticity of individual neurons. Any aberration in this inherent property may lead to instability, with the development of dynamical chaos, and can be manifested as pathological conditions such as intestinal dysrhythmia and irritable bowel syndrome. The aim of this study is to investigate patterns of signal transduction within a two-neuron chain - a ganglion - under normal physiological and structurally altered states. The ganglion contains a primary sensory (AH-type) and a motor (S-type) neuron linked through a cholinergic dendrosomatic synapse. The neurons have distinct electrophysiological characteristics, including the levels of the resting and threshold membrane potentials and the spiking activity. These result from the dynamics of the ionic channels, namely Na+, K+, Ca++-activated K+, Ca++, and Cl-. Mechanical stretches of various intensities and frequencies applied at the receptive field of the AH-neuron generate a cascade of electrochemical events along the chain. At low frequencies, ν < 0.3 Hz, the neurons demonstrate strong connectivity and coherent firing: the AH-neuron shows phasic bursting with spike frequency adaptation, while the S-neuron responds with tonic bursts. At high frequencies, ν > 0.5 Hz, the patterns of electrical activity change to rebound and mixed-mode bursting, respectively, indicating a ganglionic loss of plasticity and adaptability. A simultaneous increase in neuronal conductivity for Na+, K+, and Ca++ ions results in tonic mixed spiking of the sensory neuron and class 2 excitability of the motor neuron.
Although the signal transduction along the chain remains stable, synchrony in the firing pattern is not maintained, and the number of discharges of the S-type neuron is significantly reduced. A concomitant increase in Ca++-activated K+ conductivity and a decrease in K+ conductivity re-establishes weak connectivity between the two neurons and converts their firing pattern to a bistable mode. It is thus demonstrated that neuronal plasticity and adaptability have a stabilizing effect on the dynamics of signal processing in the ganglion. Functional modulation of neuronal ion channel permeability, achieved in vivo and in vitro pharmacologically, can improve connectivity between neurons. These findings are consistent with experimental electrophysiological recordings from myenteric ganglia in intestinal dysrhythmia and suggest possible pathophysiological mechanisms.
Keywords: neuronal chain, signal transduction, plasticity, stability
Procedia PDF Downloads 392
3170 Analysis of EEG Signals Using Wavelet Entropy and Approximate Entropy: A Case Study on Depression Patients
Authors: Subha D. Puthankattil, Paul K. Joseph
Abstract:
Analyzing the brain signals of patients suffering from depression may lead to interesting observations in signal parameters that are quite different from those of normal controls. The present study adopts two different methods, a time-frequency domain method and a nonlinear method, for the analysis of EEG signals acquired from depression patients and age- and sex-matched normal controls. The time-frequency domain analysis is realized using wavelet entropy, and approximate entropy is employed for the nonlinear analysis. The ability of the signal processing technique and the nonlinear method to differentiate the physiological aspects of the brain state is revealed using wavelet entropy and approximate entropy.
Keywords: EEG, depression, wavelet entropy, approximate entropy, relative wavelet energy, multiresolution decomposition
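The nonlinear measure used above, approximate entropy (ApEn), follows Pincus's standard definition: the negative log-likelihood that sequences close for m points remain close for m+1 points. A compact sketch on toy signals (parameters m = 2, r = 0.2·SD are the conventional defaults, not values from the study):

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    # ApEn(m, r): lower values indicate a more regular, self-similar signal
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # conventional tolerance: 20% of the SD

    def phi(m):
        # all overlapping length-m templates
        t = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        # fraction of templates within tolerance r (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 8 * np.pi, 200))  # predictable signal
irregular = rng.standard_normal(200)              # white noise
print(approximate_entropy(regular) < approximate_entropy(irregular))  # True
```

A depressed-vs-control EEG comparison would apply the same statistic to the recorded channels rather than to synthetic signals.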
Procedia PDF Downloads 332
3169 Optimization of Titanium Leaching Process Using Experimental Design
Authors: Arash Rafiei, Carroll Moore
Abstract:
The leaching process, as the first stage of hydrometallurgy, is a multidisciplinary system involving material properties, chemistry, reactor design, mechanics, and fluid dynamics. Optimizing a leaching system by pure scientific methods therefore requires considerable time and expense. In this work, a mixture of two titanium ores and one titanium slag is used for extracting titanium in the leaching stage of a TiO2 pigment production procedure. Optimum titanium extraction can be obtained from the following strategies: i) maximizing titanium extraction without selective digestion; and ii) optimizing selective titanium extraction by balancing maximum titanium extraction against minimum impurity digestion. The main difference between the two strategies lies in the process optimization framework. In the first strategy, the most important stage of the production process is treated as the main stage, and the remaining stages are adapted to it. The second strategy optimizes the performance of more than one stage at once; it is more technically complex than the first, but it brings more economic and technical advantages for the leaching system. Obviously, each strategy has its own optimum operational zone, different from the other's, and the best operational zone is chosen based on the complexity and the economic and practical aspects of the leaching system. The experimental design has been carried out using the Taguchi method. The most important advantages of this methodology are that it involves different technical aspects of the leaching process; it minimizes the number of needed experiments, as well as time and expense; and it accounts for parameter interactions through the principles of multifactor-at-a-time optimization. The leaching tests have been done at laboratory batch scale with appropriate temperature control, and the leaching tank geometry has been kept fixed as an important factor for providing comparable agitation conditions.
The data analysis has been done using reactor design and mass balancing principles. Finally, the optimum zones for the operational parameters are determined for each leaching strategy and discussed with respect to their economic and practical aspects.
Keywords: titanium leaching, optimization, experimental design, performance analysis
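Taguchi analysis ranks factor settings by a signal-to-noise ratio; for an extraction yield, the "larger-is-better" form applies. A minimal sketch (the replicate extraction percentages are hypothetical, not the study's measurements):

```python
import numpy as np

def sn_larger_is_better(y):
    # Taguchi S/N ratio for a larger-is-better response:
    # SN = -10 * log10( mean(1 / y_i^2) ), maximized at the best setting
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical titanium-extraction percentages from replicate runs of one
# orthogonal-array trial
print(round(sn_larger_is_better([78.0, 81.0, 80.0]), 2))  # 38.02
```

In a full Taguchi study, this S/N value is computed per trial of the orthogonal array, and the factor levels with the highest average S/N are selected.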
Procedia PDF Downloads 372
3168 Coordinated Interference Canceling Algorithm for Uplink Massive Multiple Input Multiple Output Systems
Authors: Messaoud Eljamai, Sami Hidouri
Abstract:
Massive multiple-input multiple-output (MIMO) is an emerging technology for new cellular networks such as 5G systems. Its principle is to use many antennas per cell in order to maximize the network's spectral efficiency. Inter-cellular interference remains a fundamental problem, and the use of massive MIMO does not change this: it improves performance only when the number of antennas is significantly greater than the number of users, which considerably limits the network's spectral efficiency. In this paper, a coordinated detector for an uplink massive MIMO system is proposed in order to mitigate inter-cellular interference. The proposed scheme combines the coordinated multipoint (CoMP) technique with an interference-cancelling algorithm. It requires the serving cell to send its received symbols, after processing, decision, and error detection, to the interfered cells via a backhaul link. Each interfered cell can then eliminate inter-cellular interference by generating the interfering user's contribution and subtracting it from the received signal; the resulting signal is more reliable than the original received signal. This allows the uplink massive MIMO system to improve its performance dramatically. Simulation results show that the proposed detector improves system spectral efficiency compared to classical linear detectors.
Keywords: massive MIMO, CoMP, interference canceling algorithm, spectral efficiency
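The regenerate-and-subtract step at the heart of the scheme can be illustrated with a scalar toy model (BPSK symbols, a single known interference channel gain, no MIMO dimensions; all values are illustrative, not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# BPSK symbols of an interfering user, decided at its own serving cell and
# shared with the interfered cell over the backhaul link
interferer_syms = 2 * rng.integers(0, 2, 16) - 1
h_int = 0.6  # channel gain from the interferer, assumed known here

desired = 2 * rng.integers(0, 2, 16) - 1  # this cell's own user

# Received signal: desired user + inter-cell interference + thermal noise
received = desired + h_int * interferer_syms + 0.05 * rng.standard_normal(16)

# Regenerate the interferer's contribution and subtract it
cleaned = received - h_int * interferer_syms

detected = np.sign(cleaned)
print(int(np.sum(detected != desired)))  # residual bit errors: 0
```

Without the subtraction, the 0.6-amplitude interference term would frequently flip the sign of the ±1 desired symbols; after cancellation, only the small noise term remains.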
Procedia PDF Downloads 147
3167 Development of High Temperature Mo-Si-B Based In-situ Composites
Authors: Erhan Ayas, Buse Katipoğlu, Eda Metin, Rifat Yılmaz
Abstract:
The search has begun for new materials that can be used at temperatures even higher than the service temperature (~1150 °C) at which nickel-based superalloys are currently used; this search must also meet the increasing demands for improved energy efficiency, and materials studied for aerospace applications are expected to have good oxidation resistance. Mo-Si-B alloys, which have higher operating temperatures than nickel-based superalloys, are candidates for the ultra-high-temperature materials used in gas turbines and jet engines, because the Moss and Mo₅SiB₂ (T2) phases exhibit a high melting temperature, excellent high-temperature creep strength, and good oxidation resistance. However, their low fracture toughness at room temperature is a disadvantage, although this property can be improved with an optimum Moss phase fraction and microstructure control. The high density is also a problem for structural parts: in turbine rotors, for example, the higher the weight, the higher the centrifugal force, which reduces the creep life of the material. The density of nickel-based superalloys and of the T2 phase is in the range of 8.6-9.2 g/cm³, but when the T2 phase is combined with Moss (density 10.2 g/cm³), the overall density rises above that of nickel-based superalloys. With some ceramic-based additions, this value can be brought back toward optimum values.
Keywords: molybdenum, composites, in-situ, MMC
Procedia PDF Downloads 66
3166 Readout Development of a LGAD-based Hybrid Detector for Microdosimetry (HDM)
Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincezo Monaco, Boscardin Maurizio, La Tessa Chiara
Abstract:
Clinical outcomes collected over the past three decades suggest that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome therapy uncertainties and optimize treatment outcome, the best possible description of radiation quality, linking the physical dose to the biological effects, is of paramount importance. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposited in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called "Mean Chord Length" (MCL) approximation and is related to the detector geometry. To improve the characterization of radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. To measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 um of tissue, while the LGADs are very suitable for particle tracking because their thickness can be thinned down to tens of micrometers and they respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations.
Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information is joined for each event: the energy deposition in the TEPC and the respective track length recorded by the LGAD tracker. This challenge is being addressed by implementing System-on-Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out by 3 ADCs mounted on an FPGA board. The signals of the activated LGAD strips are processed by dedicated chips, and the activated strips are finally stored, relying again on FPGA-based solutions. In this work, we provide a detailed description of the HDM geometry and of the SoC solutions that we are implementing for the readout.
Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry
Procedia PDF Downloads 175
3165 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems are part of non-classical optimization. There are problems in which the number of variables is finite, and the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, any of the constraints or the objective function is nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of individual investment of portfolio. For example, we could make an investment of M euros in N shares for a specified period. Let yi> 0, the return on money invested in stock i for each dollar since the end of the period (i = 1, ..., N). The logical goal here is to determine the amount xi to be invested in stock i, i = 1, ..., N, such that we maximize the period at the end of ytx value, where x = (x1, ..., xn) and y = (y1, ..., yn). For us the optimal portfolio means the best portfolio in the ratio "risk-return" to the investor portfolio that meets your goals and risk ways. Therefore, investment goals and risk appetite are the factors that influence the choice of appropriate portfolio of assets. The investment returns are uncertain. Thus we have a semi-infinite programming problem. We solve a semi-infinite optimization problem of portfolio selection using the outer approximations methods. This approach can be considered as a developed Eaves-Zangwill method applying the multi-start technique in all of the iterations for the search of relevant constraints' parameters. The stochastic outer approximations method, successfully applied previously for robotics problems, Chebyshev approximation problems, air pollution and others, is based on the optimal criteria of quasi-optimal functions. 
As a result, we obtain the mathematical model and the optimal investment portfolio when the yields are not known in advance. Finally, we apply the algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
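The reduction of the semi-infinite problem to a sequence of finite ones can be sketched with a toy exchange/outer-approximation loop: keep a finite set of scenario constraints, solve the finite master problem, then add the most violated scenario. Everything below (three assets, the scenario-dependent yield curves, the grid-based master solver) is an illustrative assumption, not the authors' algorithm:

```python
# Toy exchange / outer-approximation loop for a semi-infinite portfolio
# problem: maximize the worst-case end-of-period value min_s y(s)^T x over
# the simplex {x >= 0, sum(x) = 1}. The three assets and their scenario-
# dependent yields y(s) are illustrative assumptions, not the authors' data.

def yields(s):
    # Hypothetical per-unit returns of 3 assets under scenario s in [0, 1].
    return (1.08 - 0.10 * s, 1.02 + 0.04 * s, 1.05 - 0.02 * (s - 0.5) ** 2)

def worst_case(x, scenarios):
    return min(sum(yi * xi for yi, xi in zip(yields(s), x)) for s in scenarios)

def simplex_grid(steps=50):
    # Enumerate candidate portfolios x on the 3-asset simplex.
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            yield (i / steps, j / steps, (steps - i - j) / steps)

def outer_approximation(tol=1e-6, max_iter=20):
    scenarios = [0.0]                     # start from one finite constraint
    fine = [k / 200 for k in range(201)]  # separation grid over the index set
    for _ in range(max_iter):
        # Finite master problem: best portfolio against current constraints.
        x = max(simplex_grid(), key=lambda p: worst_case(p, scenarios))
        t = worst_case(x, scenarios)
        # Separation step: find the most violated scenario constraint.
        s_bad = min(fine, key=lambda s: worst_case(x, [s]))
        if worst_case(x, [s_bad]) >= t - tol:
            return x, t                   # no violated scenario: grid-optimal
        scenarios.append(s_bad)
    return x, worst_case(x, fine)

x_opt, ret = outer_approximation()
```

In practice the master problem is solved by nonlinear programming rather than grid enumeration, but the constraint-exchange structure is the same.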
Procedia PDF Downloads 309
3164 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes in order to determine an optimum penalty factor, considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at an interval of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with an increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced approximately the same noise level as OSEM at an increased acquisition duration (5 min/bp). For a β-value of 400 at a 2 min/bp duration, SNR increased by 43.7%, and LE decreased by 62%, compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translates into a decrease in SUV.
The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, since noise is reduced more than SUV at the highest β-value. When comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm using more iterations leads to higher quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
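The relative difference penalty that the β-value scales can be sketched in one dimension (the functional form below is the one commonly attributed to Nuyts et al.; the test images and the edge-preservation parameter γ are illustrative assumptions):

```python
# One-dimensional sketch of the relative difference penalty (RDP) that the
# beta-value scales in BSREM-type reconstruction. The images and gamma value
# are illustrative, not taken from the study.

def relative_difference_penalty(image, beta, gamma=2.0):
    # Sum over neighbouring voxel pairs; PET uses a 3-D neighbourhood.
    total = 0.0
    for j in range(len(image) - 1):
        diff = image[j] - image[j + 1]
        if diff != 0:
            total += diff * diff / (image[j] + image[j + 1] + gamma * abs(diff))
    return beta * total

smooth = [10.0, 10.5, 11.0, 11.5]   # gentle gradient: weakly penalised
edgy = [10.0, 10.0, 20.0, 20.0]     # sharp edge: penalised, but sub-linearly
p_smooth = relative_difference_penalty(smooth, beta=400)
p_edgy = relative_difference_penalty(edgy, beta=400)
# Raising beta from 400 to 500 scales the whole penalty by 500/400, which is
# how the study strengthens noise suppression.
```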
Procedia PDF Downloads 97
3163 Callus Induction, In-Vitro Plant Regeneration and Acclimatization of Lycium barbarum L. (Goji)
Authors: Rosna Mat Taha, Sakinah Abdullah, Sadegh Mohajer, Asmah Awal
Abstract:
Lycium barbarum L. (Goji) belongs to the Solanaceae family and is native to some areas of China. Ethnobotanical studies have shown that this plant has been consumed by the Chinese since ancient times. It has been used as medicine, providing excellent effects on the cardiovascular system and cholesterol levels, besides containing high levels of antioxidant and antidiabetic compounds. In the present study, tissue culture work was carried out to induce callus and in vitro regeneration from various explants of Goji, and acclimatization protocols were followed to transfer the regenerated plants to soil. The main aim was to establish a highly efficient regeneration system for mass production and future commercialization, since the growth of this species is very limited in Malaysia. The optimum hormonal regime and the most suitable, responsive explants were identified; leaves and stems gave good responses. Murashige and Skoog's (MS) medium supplemented with 2.0 mg/L NAA and 0.5 mg/L BAP was the best for callus induction, and MS medium fortified with 1.0 mg/L NAA and 1.0 mg/L BAP was optimum for in vitro regeneration. The survival rates of plantlets after acclimatization were 63±1.5% on black soil and 50±1.3% on mixed soil (black and red soil combined at a ratio of 2 to 1), respectively.
Keywords: callus, acclimatization, in vitro culture, regeneration
Procedia PDF Downloads 446
3162 Enhancement of Performance Utilizing Low Complexity Switched Beam Antenna
Authors: P. Chaipanya, R. Keawchai, W. Sombatsanongkhun, S. Jantaramporn
Abstract:
To manage the dramatically increasing demand for wireless communication, the switched beam antenna of a smart antenna system is the focus here. Implementing switched beam antennas at mobile terminals such as notebooks or mobile handsets is a preferable choice for increasing the performance of wireless communication systems. This paper proposes a low complexity switched beam antenna using a single antenna element, which is suitable for implementation at a mobile terminal. The main beam direction is switched by changing the positions of a short circuit on the radiating patch. There are four switching cases, providing four different main beam directions. Moreover, the performance in terms of signal to interference ratio when utilizing the proposed antenna is compared with that of an omni-directional antenna to confirm the performance improvement.
Keywords: switched beam, short circuit, single element, signal to interference ratio
Procedia PDF Downloads 171
3161 Ultrasound Assisted Cooling Crystallization of Lactose Monohydrate
Authors: Sanjaykumar R. Patel, Parth R. Kayastha
Abstract:
α-lactose monohydrate is widely used in the pharmaceutical industries as an inactive substance that acts as a vehicle or medium for a drug or other active substance. It is a byproduct of the dairy industries, and the recovery of lactose from whey not only improves the economics of whey utilization but also reduces pollution, as lactose recovery can reduce the BOD of whey by more than 80%. In the present study, the levels of the process parameters were initial lactose concentration (30-50% w/w), sonication amplitude (20-40%), sonication time (2-6 hours), and crystallization temperature (10-20 °C) for the recovery of lactose in ultrasound assisted cooling crystallization. Compared with plain cooling crystallization, the use of ultrasound enhanced the lactose recovery by 39.17% (w/w). The parameters were optimized for lactose recovery using the Taguchi method. The optimum conditions found were initial lactose concentration at level 3 (50% w/w), sonication amplitude at level 2 (40%), sonication time at level 3 (6 hours), and crystallization temperature at level 1 (10 °C). The maximum recovery at the optimum conditions was 85.85%. Sonication time and initial lactose concentration were found to be the significant parameters for lactose recovery.
Keywords: crystallization, lactose, Taguchi method, ultrasound
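The Taguchi analysis behind the optimum above can be sketched as: compute the larger-the-better signal-to-noise ratio for each run of a standard L9(3⁴) orthogonal array (four factors at three levels) and pick, per factor, the level with the highest mean S/N. The recovery values below are hypothetical placeholders chosen so the sketch reproduces the reported optimum (levels 3, 2, 3, 1), not the study's measurements:

```python
import math

# Taguchi larger-the-better analysis over the standard L9(3^4) array.
L9 = [  # factor levels (1..3): concentration, amplitude, time, temperature
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]
recovery = [61, 64, 61, 60, 73, 62, 69, 65, 70]  # % recovery (hypothetical)

def sn_larger_is_better(y):
    # S/N ratio eta = -10*log10(mean(1/y^2)); single replicate per run.
    return -10 * math.log10(1 / y ** 2)

def best_levels(array, response):
    sn = [sn_larger_is_better(y) for y in response]
    best = []
    for factor in range(4):
        # Each level appears in exactly 3 of the 9 runs (orthogonality).
        means = [
            sum(s for row, s in zip(array, sn) if row[factor] == level) / 3
            for level in (1, 2, 3)
        ]
        best.append(1 + means.index(max(means)))
    return best
```

With this placeholder data, `best_levels(L9, recovery)` returns `[3, 2, 3, 1]`, i.e. the level combination reported in the abstract.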
Procedia PDF Downloads 212
3160 Material Detection by Phase Shift Cavity Ring-Down Spectroscopy
Authors: Rana Muhammad Armaghan Ayaz, Yigit Uysallı, Nima Bavili, Berna Morova, Alper Kiraz
Abstract:
Traditional optical methods for material detection and sensing, such as resonance wavelength shift and cavity ring-down spectroscopy, have disadvantages: they are less resistant to laser noise and temperature fluctuations, and extracting the required information, such as the ring-down time in cavity ring-down spectroscopy, can be difficult. Phase shift cavity ring-down spectroscopy is not only easy to use but is also capable of overcoming these problems. This technique compares the phase difference between the signal coming out of the cavity and a reference signal; any material is detected by the phase difference between them. Using this technique, air, water, and isopropyl alcohol can be recognized easily. This methodology has far-reaching applications and can be used in air pollution detection, human breath analysis, and more.
Keywords: materials, noise, phase shift, resonance wavelength, sensitivity, time domain approach
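The phase-comparison principle rests on the standard relation between the modulation phase lag and the ring-down time, tan φ = 2π f_mod τ (taking magnitudes). A minimal sketch, with an assumed modulation frequency and illustrative ring-down times:

```python
import math

# Phase lag of the modulated cavity output relative to the reference signal:
# tan(phi) = 2*pi*f_mod*tau (magnitudes), so the ring-down time tau -- and
# hence the intracavity medium -- is read directly from the phase difference.
# The modulation frequency and tau values are illustrative assumptions.

F_MOD = 50e3  # intensity-modulation frequency in Hz (assumed)

def phase_shift(tau):
    """Phase lag (rad) produced by a cavity with ring-down time tau (s)."""
    return math.atan(2 * math.pi * F_MOD * tau)

def ring_down_time(phi):
    """Invert a measured phase lag back to the ring-down time."""
    return math.tan(phi) / (2 * math.pi * F_MOD)

# A lossier medium (e.g. water vs. air in the cavity) shortens tau and
# therefore shrinks the measured phase lag, which is the detection signal.
tau_air = 2.0e-6  # s, assumed
phi_air = phase_shift(tau_air)
tau_recovered = ring_down_time(phi_air)
```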
Procedia PDF Downloads 149
3159 Optimization of Assay Parameters of L-Glutaminase from Bacillus cereus MTCC1305 Using Artificial Neural Network
Authors: P. Singh, R. M. Banik
Abstract:
An artificial neural network (ANN) was employed to optimize the assay parameters, viz., time, temperature, pH of the reaction mixture, enzyme volume and substrate concentration, of L-glutaminase from Bacillus cereus MTCC 1305. The ANN model showed a high coefficient of determination (0.9999), a low root mean square error (0.6697) and a low absolute average deviation. A multilayer perceptron neural network trained with an error back-propagation algorithm was used to develop the predictive model, and its topology was obtained as 5-3-1 after applying the Levenberg-Marquardt (LM) training algorithm. The predicted activity of L-glutaminase was 633.7349 U/l at the optimum assay parameters, viz., pH of the reaction mixture (7.5), reaction time (20 minutes), incubation temperature (35 °C), substrate concentration (40 mM), and enzyme volume (0.5 ml). The prediction was verified by running an experiment at the simulated optimum assay conditions, where an activity of 634.00 U/l was obtained. The application of the ANN model for optimization of the assay conditions improved the activity of L-glutaminase 1.499-fold.
Keywords: Bacillus cereus, L-glutaminase, assay parameters, artificial neural network
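The 5-3-1 topology can be sketched as a plain forward pass: five assay inputs, three hidden neurons, one activity output. Only the topology follows the abstract; the weights, biases, and input scaling below are random placeholders, not the trained Levenberg-Marquardt values:

```python
import math

# Forward pass of a 5-3-1 multilayer perceptron: 5 assay inputs (pH, time,
# temperature, substrate concentration, enzyme volume) -> 3 tanh hidden
# neurons -> 1 linear output (predicted activity). All numbers are
# placeholders for illustration of the topology only.

def mlp_5_3_1(inputs, w_hidden, b_hidden, w_out, b_out):
    assert len(inputs) == 5 and len(w_hidden) == 3
    hidden = [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(w_hidden, b_hidden)
    ]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out  # linear output

# Assay parameters scaled to [0, 1] (placeholder scaling):
x = [0.75, 0.33, 0.35, 0.40, 0.50]
w_h = [[0.2, -0.1, 0.4, 0.3, -0.2],
       [0.1, 0.5, -0.3, 0.2, 0.1],
       [-0.4, 0.2, 0.1, -0.1, 0.3]]
b_h = [0.1, -0.2, 0.05]
activity = mlp_5_3_1(x, w_h, b_h, w_out=[0.6, -0.4, 0.3], b_out=0.5)
```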
Procedia PDF Downloads 429
3158 Effect of Physicochemical Treatments on the Characteristics of Activated Sludge
Authors: Hammadi Larbi
Abstract:
The treatment of wastewater in sewage plants usually results in the formation of a large amount of sludge, which appears at the outlet of the treatment plant as a viscous fluid loaded with a high concentration of dry matter. This sludge production presents environmental, ecological, and economic risks, which is why solutions for minimizing them are needed. In the present article, the effects of hydrogen peroxide, thermal treatment, and quicklime on the characteristics of the activated sludge produced in an urban wastewater plant were evaluated in order to avoid such risks in the plants. The study shows that increasing the dose of H2O2 from 0 to 0.4 g causes an increase in the solubilization rate of COD from 12% to 45% and a reduction in the organic matter content of the sludge (VM/SM) from 74% to 36%. The results also show that the efficiency of the heat treatment for a treatment time of 40 min is 47% at 80 °C, 51.82% at 100 °C, 76.30% at 120 °C, and 79.38% at 140 °C. Treatment of the sludge with quicklime gives an optimum efficiency of 70.62%. Increasing the temperature from 80 °C to 140 °C raised the pH of the sludge from 7.12 to 9.59, while increasing the dose of quicklime from 0 g/l to 1 g/l raised the pH from 7.12 to 12.06 and the solubilization of COD from 0% to 70.62%.
Keywords: activated sludge, hydrogen peroxide, thermal treatment, quicklime
Procedia PDF Downloads 104
3157 Magnetohemodynamic of Blood Flow Having Impact of Radiative Flux Due to Infrared Magnetic Hyperthermia: Spectral Relaxation Approach
Authors: Ebenezer O. Ige, Funmilayo H. Oyelami, Joshua Olutayo-Irheren, Joseph T. Okunlola
Abstract:
Hyperthermia therapy is an adjuvant procedure during which perfused body tissue is subjected to an elevated temperature range in a bid to achieve improved drug potency and efficacy in cancer treatment. While one class of hyperthermia techniques relies on the thermal radiation derived from a single-source electro-radiation measure, there are deliberations on conjugating dual radiation field sources in an attempt to improve the delivery of the therapy procedure. This paper numerically explores the thermal effectiveness of combined infrared hyperthermia with nanoparticle recirculation in the vicinity of an imposed magnetic field on the subcutaneous strata of a model lesion as an ablation scheme. An elaborate spectral relaxation method (SRM) was formulated to handle the coupled momentum and thermal equilibrium equations in the blood-perfused domain of a spongy fibrous tissue. Thermal diffusion regimes under an imposed external magnetic field were described by leveraging the well-known Rosseland diffusion approximation to delineate the impact of radiative flux within the computational domain. The contribution of tissue sponginess was examined using pore-scale porosity mechanics over a selection of clinically informed scenarios. Our observations showed that, for a substantial depth of spongy lesion, the magnetic field architecture constitutes the controlling regime of hemodynamics at the blood-tissue interface while facilitating thermal transport across the depth of the model lesion. This parameter indicator could be utilized to control the dispensing of hyperthermia treatment in intravenously perfused tissue.
Keywords: spectral relaxation scheme, thermal equilibrium, Rosseland diffusion approximation, hyperthermia therapy
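The radiative flux term closed by the Rosseland diffusion approximation mentioned above is commonly written as follows (the symbols are the standard ones, assumed here rather than taken from the paper):

```latex
q_r = -\frac{4\sigma^{*}}{3k^{*}}\,\frac{\partial T^{4}}{\partial y},
\qquad
T^{4} \approx 4T_{\infty}^{3}T - 3T_{\infty}^{4}
\;\Longrightarrow\;
\frac{\partial q_r}{\partial y} \approx -\frac{16\sigma^{*}T_{\infty}^{3}}{3k^{*}}\,\frac{\partial^{2} T}{\partial y^{2}},
```

where σ* is the Stefan-Boltzmann constant, k* the mean absorption coefficient, and the Taylor expansion of T⁴ about the ambient temperature T∞ assumes small temperature differences within the medium, which linearizes the radiative term entering the energy equation.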
Procedia PDF Downloads 118
3156 Secured Embedding of Patient’s Confidential Data in Electrocardiogram Using Chaotic Maps
Authors: Butta Singh
Abstract:
This paper presents a chaotic map based approach for secured embedding of a patient's confidential data in the electrocardiogram (ECG) signal. The chaotic map generates predefined locations through the use of selective control parameters. A sample value difference method then effectually hides the confidential data in ECG sample pairs at these predefined locations. Evaluation of the proposed method on all 48 records of the MIT-BIH arrhythmia ECG database demonstrates that the embedding does not alter the diagnostic features of the cover ECG. The imperceptibility of the secret data in the stego-ECG is evident through various statistical and clinical performance measures. The statistical metrics comprise the Percentage Root Mean Square Difference (PRD) and Peak Signal to Noise Ratio (PSNR). Further, a comparative analysis between the proposed method and existing approaches was performed; the results clearly demonstrate the superiority of the proposed method.
Keywords: chaotic maps, ECG steganography, data embedding, electrocardiogram
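The location-selection idea can be sketched as follows: a logistic map seeded by a secret key deterministically generates distinct sample-pair positions, and one secret bit is hidden per pair. The embedding rule below (parity of the pair difference, changing one sample by at most 1) is a simplified stand-in for the paper's sample-value-difference method; the ECG samples and key are made up:

```python
# Chaotic location selection plus a toy pair-difference embedding rule.

def logistic_indices(key, n_pairs, n_samples):
    r, x = 3.99, key                 # r in the chaotic regime, key in (0, 1)
    n_avail = n_samples // 2         # number of disjoint (even, odd) pairs
    seen, idx = set(), []
    while len(idx) < n_pairs:        # assumes n_pairs <= n_avail
        x = r * x * (1 - x)          # logistic map iteration
        p = int(x * n_avail)
        if p not in seen:            # keep embedding locations distinct
            seen.add(p)
            idx.append(2 * p)        # start index of the sample pair
    return idx

def embed(ecg, bits, key):
    stego = list(ecg)
    for i, bit in zip(logistic_indices(key, len(bits), len(ecg)), bits):
        if (stego[i + 1] - stego[i]) % 2 != bit:
            stego[i + 1] += 1        # force the difference parity to the bit
    return stego

def extract(stego, n_bits, key):
    # The same key regenerates the same locations, so no side channel needed.
    return [(stego[i + 1] - stego[i]) % 2
            for i in logistic_indices(key, n_bits, len(stego))]

ecg = [1000, 1004, 1010, 1003, 998, 995, 1001, 1020, 1040, 1015, 990, 985]
bits = [1, 0, 1, 1]
stego = embed(ecg, bits, key=0.31)
```

Because each sample changes by at most one quantization step, distortion metrics such as PRD and PSNR stay small, which mirrors the imperceptibility argument of the paper.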
Procedia PDF Downloads 195
3155 Optimization of Fin Type and Fin per Inch on Heat Transfer and Pressure Drop of an Air Cooler
Authors: A. Falavand Jozaei, A. Ghafouri
Abstract:
Operation enhancement in an air cooler (heat exchanger) depends on the rate of heat transfer and the pressure drop. In this paper, for a given heat duty, the effects of FPI (fins per inch) and fin type (circular and hexagonal fins) on these two parameters are studied for an air cooler at Arvand Petrochemical in Iran. A program written in EES (Engineering Equation Solver) software, together with the Aspen B-JAC and HTFS+ packages, is used to solve the governing equations. First, the simulated results obtained from this program are compared to experimental data for two cases of FPI. The effect of FPI from 3 to 15 on the heat transfer (Q) to pressure drop (Δp) ratio is then examined; this ratio is one of the main parameters in the design, rating, and simulation of heat exchangers. The results show that heat transfer (Q) and pressure drop both increase steadily with increasing FPI, while the Q/Δp ratio increases up to FPI = 12 (by about 47% for circular fins and about 69% for hexagonal fins) and then decreases gradually up to FPI = 15 (by about 5% for circular fins and about 8% for hexagonal fins); the Q/Δp ratio is therefore maximum at FPI = 12. An FPI value between 8 and 12 is obtained as the optimum for the heat transfer to pressure drop ratio. Comparing circular and hexagonal fins, the Q/Δp ratio of hexagonal fins is higher than that of circular fins for FPI between 8 and 12 (the optimum FPI range).
Keywords: air cooler, circular and hexagonal fins, fin per inch, heat transfer and pressure drop
Procedia PDF Downloads 454
3154 Effect of Aging Time on CeO2 Nanoparticle Size Distribution Synthesized via Sol-Gel Method
Authors: Navid Zanganeh, Hafez Balavi, Farbod Sharif, Mahla Zabet, Marzieh Bakhtiary Noodeh
Abstract:
Cerium oxide (CeO2), also known as cerium dioxide or ceria, is a pale yellow-white powder with various applications in industry, from wood coating to cosmetics, filtration, fuel cell electrolytes, gas sensors, hybrid solar cells and catalysts. In this research, attempts were made to synthesize and characterize CeO2 nanoparticles via the sol-gel method. In addition, the effect of aging time on the particle size was investigated. For this purpose, the aging times were set to 48, 56, 64, and 72 min. The obtained particles were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), and Brunauer-Emmett-Teller (BET) analysis. The XRD patterns confirmed the formation of CeO2 nanoparticles. SEM and TEM images showed cluster-shaped, spherical nanoparticles in a nano-size range, in agreement with the XRD results. The finest particles (7.3 nm) were obtained at the optimum conditions: an aging time of 48 min, a calcination temperature of 400 °C, and a cerium concentration of 0.004 mol. The average specific surface area of the particles at the optimum conditions was measured by BET analysis as 47.57 m2/g.
Keywords: aging time, CeO2 nanoparticles, size distribution, sol-gel
Procedia PDF Downloads 456
3153 The Variable Sampling Interval Xbar Chart versus the Double Sampling Xbar Chart
Authors: Michael B. C. Khoo, J. L. Khoo, W. C. Yeong, W. L. Teoh
Abstract:
The Shewhart Xbar control chart is a useful process monitoring tool in manufacturing industries for detecting the presence of assignable causes. However, it is insensitive to small process shifts. To circumvent this problem, adaptive control charts have been suggested. An adaptive chart enables at least one of the chart's parameters to be adjusted to increase the chart's sensitivity. Two common adaptive charts in the literature are the double sampling (DS) Xbar and variable sampling interval (VSI) Xbar charts. This paper compares the performance of the DS and VSI Xbar charts based on the average time to signal (ATS) criterion. The ATS profiles of the DS Xbar and VSI Xbar charts are obtained using the Mathematica and Statistical Analysis System (SAS) programs, respectively. The results show that the VSI Xbar chart is generally superior to the DS Xbar chart.
Keywords: adaptive charts, average time to signal, double sampling charts, variable sampling interval
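The VSI idea behind this comparison can be sketched with a small Monte-Carlo estimate of the average time to signal: the next sample is taken after a short interval whenever the Xbar statistic falls in a warning region, after a long interval otherwise. The chart constants below (k = 3, warning limit w = 1, intervals 1.5 and 0.1 time units) are common textbook choices, not the settings used in the paper:

```python
import random
import statistics

# Monte-Carlo estimate of the ATS of a VSI Xbar chart: signal beyond +/- k
# standard errors; sample after the short interval when Xbar lies in the
# warning region (w, k], after the long interval otherwise.

def vsi_ats(shift, n=5, k=3.0, w=1.0, t_long=1.5, t_short=0.1,
            reps=5000, seed=7):
    rng = random.Random(seed)
    se = 1 / n ** 0.5                # standard error of Xbar (unit sigma)
    times = []
    for _ in range(reps):
        t, interval = 0.0, t_long    # first sample after the long interval
        while True:
            t += interval
            z = abs(rng.gauss(shift, se)) / se  # |Xbar| in standard errors
            if z > k:                # out-of-control signal
                times.append(t)
                break
            interval = t_short if z > w else t_long
    return statistics.mean(times)

# Larger shifts are signalled sooner; for small shifts the VSI scheme beats a
# fixed-interval chart sampling every t_long time units.
ats_small = vsi_ats(shift=0.5)
```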
Procedia PDF Downloads 286