Search results for: detecting of envelope modulation on noise

666 User Authentication Using Graphical Password with Sound Signature

Authors: Devi Srinivas, K. Sindhuja

Abstract:

This paper presents an architecture to improve surveillance applications based on the service-oriented paradigm, with smartphones as user terminals, allowing dynamic application composition and increasing the flexibility of the system. Based on the results of moving-object detection research on video sequences, the movement of people is tracked using video surveillance. The moving object is identified using the image subtraction method: the background image is subtracted from the foreground image, and the moving object is derived from the difference. A threshold is then applied to the background-subtraction result to identify the moving frame, and by means of this threshold the movement of the frame is identified and tracked, so the movement of the object is identified accurately. This paper deals with a low-cost intelligent mobile-phone-based wireless video surveillance solution using moving object recognition technology. The proposed solution can be useful in various security systems and environmental surveillance. The fundamental rule of moving object detection is given in the paper; then a self-adaptive background representation, which can update automatically and in a timely manner to adapt to slow and slight changes of normal surroundings, is detailed. When the subtraction of the presently captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through SMS (Short Message Service). The main advantage of this system is that when an unknown image is captured, the system will alert the user automatically by sending an SMS to the user's mobile phone.
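
A minimal NumPy sketch of the background-subtraction and thresholding step described in this abstract; the function name, threshold values, and synthetic frames are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def detect_motion(background, frame, threshold=30):
    """Flag a frame as containing a moving object when the background-
    subtracted difference exceeds a fixed per-pixel threshold.

    background, frame : 2-D uint8 grayscale arrays of equal shape.
    Returns (motion_detected, motion_mask).
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    motion_mask = diff > threshold              # pixels that changed
    changed_fraction = motion_mask.mean()       # share of moving pixels
    # Declare motion (and, in the paper's system, trigger an SMS alert)
    # when enough pixels differ from the background model.
    return changed_fraction > 0.01, motion_mask

# Example with synthetic frames
bg = np.zeros((120, 160), dtype=np.uint8)
fr = bg.copy()
fr[40:80, 60:100] = 200                         # a bright "object" appears
moving, mask = detect_motion(bg, fr)
print(moving, mask.sum(), "changed pixels")
```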

Keywords: security, graphical password, persuasive cued click points

Procedia PDF Downloads 524
665 Quality Assurances for an On-Board Imaging System of a Linear Accelerator: Five Months Data Analysis

Authors: Liyun Chang, Cheng-Hsiang Tsai

Abstract:

To ensure that radiation is delivered precisely to the target in cancer patients, the linear accelerator is equipped with a pretreatment on-board imaging system through which the patient setup is verified before each daily treatment. New-generation radiotherapy using beam-intensity modulation, usually associated with steep dose gradients, is claimed to achieve both a higher degree of dose conformation in the targets and a further reduction of toxicity in normal tissues. However, this benefit becomes counterproductive if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues rather than the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager® (OBI) system of one Varian Clinac-iX linear accelerator was performed through our procedures, modified from a relevant report and AAPM TG-142. Two image modalities of the OBI system, 2D radiography and 3D cone-beam computed tomography (CBCT), were examined. Daily and monthly QA was executed for five months in the categories of safety, geometrical accuracy, and image quality. A marker phantom and a blade calibration plate were used for the QA of geometrical accuracy, while the Leeds phantom and the Catphan 504 phantom were used in the QA of radiographic and CBCT image quality, respectively. The reference images were generated through a GE LightSpeed CT simulator with an ADAC Pinnacle treatment planning system. Finally, the image quality was analyzed via an OsiriX medical imaging system. For the geometrical accuracy test, the average deviations of the OBI isocenter in each direction are less than 0.6 mm with uncertainties less than 0.2 mm, while all the other items show displacements of less than 1 mm. For radiographic image quality, the spatial resolution is 1.6 lp/cm with contrast less than 2.2%. The spatial resolution, low contrast, and HU homogeneity of CBCT are greater than 6 lp/cm, less than 1%, and within 20 HU, respectively. All tests are within the criteria, except that the HU value of Teflon measured with the full-fan mode exceeds the suggested value, which could be due to its inherently high HU value and needs to be rechecked. The OBI system in our facility was thus demonstrated to be reliable, with stable image quality. QA of the OBI system is necessary to achieve the best treatment for each patient.

Keywords: CBCT, image quality, quality assurance, OBI

Procedia PDF Downloads 288
664 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mate with a horizontal lathe machine to bore an over-length workpiece. In order to verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut, and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels each in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine setting for the best surface finish, as sketched below. The controlled factors with the most effect are depth of cut, spindle speed, length of workpiece, and feed rate, in that order. A confirmation test is performed to check the optimal setting obtained from the Taguchi method, and the result is satisfactory.
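
The abstract minimizes Ra and Rz via the Taguchi smaller-is-better signal-to-noise ratio; the sketch below shows that calculation for hypothetical roughness replicates (the run labels and values are illustrative assumptions, not the study's data).

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-is-better signal-to-noise ratio in dB:
    S/N = -10 * log10(mean(y_i^2)), where y_i are the measured responses
    (here, surface roughness values) for one L9 run."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra measurements (micrometres) for three replicated runs
runs = {
    "run1": [1.8, 1.9, 2.0],
    "run2": [1.2, 1.3, 1.1],
    "run3": [2.5, 2.6, 2.4],
}
for name, ra in runs.items():
    print(name, round(sn_smaller_is_better(ra), 2), "dB")
# The run (and hence factor-level combination) with the highest S/N ratio
# gives the lowest, most consistent surface roughness.
```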

Keywords: design of experiment, Taguchi design, optimization, analysis of variance, machining parameters, horizontal boring tool

Procedia PDF Downloads 430
663 Uncontrolled Urbanization Leads to Main Challenge for Sustainable Development of Mongolia

Authors: Davaanyam Surenjav, Chinzolboo Dandarbaatar, Ganbold Batkhuyag

Abstract:

Primate-city-induced rapid urbanization has become one of the main challenges for sustainable development in Mongolia, as in other developing countries, since the transition to a market economy in 1990. According to the statistical yearbook, the population of Ulaanbaatar city has increased from 0.5 million to 1.5 million over the last 30 years and now accounts for almost half (47%) of the total Mongolian population. Migration from rural areas and from local cities to Ulaanbaatar leads to social issues such as uncontrolled urbanization, income inequality, poverty, overloaded public services, excessive economic costs for redevelopment, limited transport, and environmental degradation including air, noise, water, and soil pollution. Most thresholds of the main and sub-indicators of sustainable urban development in Ulaanbaatar have been exceeded, moving from safe to unsafe levels. So, there is an urgent need to remove migration pull factors, including some administrative and higher-education functions, from Ulaanbaatar city to its satellite cities or secondary cities. Moreover, a smart urban transport system and green and renewable energy technologies should be introduced into the urban development master plan of Ulaanbaatar city.

Keywords: challenge for sustainable urban development, migration factors, primate city, urban safety thresholds

Procedia PDF Downloads 119
662 A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System

Authors: Mulugeta K. Tefera, Xiaolong Yang, Jian Liu

Abstract:

Detection and tracking of moving objects are very important in many application contexts, such as the detection and recognition of people, visual surveillance, and the automatic generation of video effects. However, the task of detecting the real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, the presence of shadows, and illumination variations due to light switching. Once the moving object is detected, tracking is also a crucial step for applications used in military defense, video surveillance, human-computer interaction, and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using an adaptive mixture-of-Gaussians analysis of the video sequences. The detected moving object is then tracked using region-based moving object tracking and inter-frame difference mechanisms to address partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target through enhancement and post-processing morphological operations. Secondly, the extracted object is tracked using region-based moving object tracking and the inter-frame difference to improve the tracking speed of real-time moving objects across video frames. Finally, a plotting method is applied to detect the moving objects effectively and to describe the motion of the tracked object. The experiments have been performed on image sequences acquired in both indoor and outdoor environments, using a stationary web camera.
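
A small NumPy sketch of the inter-frame difference step mentioned above, with a simple bounding box standing in for the region-based track; the frame sizes, threshold, and synthetic frames are illustrative assumptions.

```python
import numpy as np

def inter_frame_difference(prev_frame, curr_frame, threshold=25):
    """Binary motion mask from the absolute difference of two consecutive
    grayscale frames (the inter-frame difference step)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def bounding_box(mask):
    """Smallest axis-aligned box enclosing the moving pixels, used here as
    a very simple region-based track of the detected object."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()   # (x0, y0, x1, y1)

# Two synthetic frames: a block moves 10 pixels to the right
f0 = np.zeros((100, 100), dtype=np.uint8); f0[40:60, 20:40] = 255
f1 = np.zeros((100, 100), dtype=np.uint8); f1[40:60, 30:50] = 255
mask = inter_frame_difference(f0, f1)
print("object region:", bounding_box(mask))
```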

Keywords: background modeling, Gaussian mixture model, inter-frame difference, object detection and tracking, video surveillance

Procedia PDF Downloads 463
661 A Cost-Benefit Analysis of Routinely Performed Transthoracic Echocardiography in the Setting of Acute Ischemic Stroke

Authors: John Rothrock

Abstract:

Background: The role of transthoracic echocardiography (TTE) in the diagnosis and management of patients with acute ischemic stroke remains controversial. While many stroke subspecialists reserve TTE for selected patients, others consider the procedure obligatory for most or all acute stroke patients. This study was undertaken to assess the cost vs. benefit of 'routine' TTE. Methods: We examined a consecutive series of patients who were admitted to a single institution in 2019 for acute ischemic stroke and underwent TTE. We sought to determine the frequency with which the results of TTE led to a new diagnosis of cardioembolism, redirected therapeutic cerebrovascular management, and at least potentially influenced the short- or long-term clinical outcome. We recorded the direct cost associated with TTE. Results: There were 1076 patients in the study group, all of whom underwent TTE. TTE identified an unsuspected source of possible/probable cardioembolism in 62 patients (6%), confirmed an initially suspected source (primarily endocarditis) in an additional 13 (1%), and produced findings that stimulated subsequent testing diagnostic of possible/probable cardioembolism in 7 patients (<1%). TTE results potentially influenced the clinical outcome in a total of 48 patients (4%). With a total direct cost of $1.51 million, the mean cost per case wherein TTE results potentially influenced the clinical outcome in a positive manner was $31,375. Diagnostically and therapeutically, TTE was most beneficial in the 67 patients under the age of 55 who presented with 'cryptogenic' stroke, identifying a patent foramen ovale in 21 (31%); closure was performed in 19. Conclusions: The utility of TTE in the setting of acute ischemic stroke is modest, with its yield greatest in younger patients with cryptogenic stroke. Given the greater sensitivity of transesophageal echocardiography in detecting PFO and evaluating the aortic arch, TTE's role in stroke diagnosis would appear to be limited.

Keywords: cardioembolic, cost-benefit, stroke, TTE

Procedia PDF Downloads 109
660 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio

Authors: Urvee B. Trivedi, U. D. Dalal

Abstract:

As wireless communication services grow quickly, the problem of spectrum utilization has gradually become more serious. Cognitive radio, an emerging technology, has been proposed to solve today's spectrum scarcity problem. To support the spectrum reuse functionality, secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. Primary users follow two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: one based on edge detection and one using the autocorrelation function. The edge detection method has high accuracy, but it cannot tolerate sensing errors. Autocorrelation-based classification is applicable in the real environment, as it can tolerate some amount of sensing errors.
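
A hedged NumPy/SciPy sketch of the energy-detection sensing step that precedes the traffic classification; the sample count, noise power, target false-alarm rate, and -5 dB SNR are illustrative assumptions, and the threshold uses the standard Gaussian approximation of the test statistic.

```python
import numpy as np
from scipy.special import erfcinv

rng = np.random.default_rng(0)

def energy_detector(x, noise_power, pfa=0.01):
    """Energy detection for spectrum sensing (complex baseband samples).

    The test statistic is the average received energy; the threshold is set
    from the noise power and a target false-alarm probability (PF) using the
    Gaussian approximation of the statistic under the noise-only hypothesis."""
    n = x.size
    stat = np.mean(np.abs(x) ** 2)
    q_inv = np.sqrt(2.0) * erfcinv(2.0 * pfa)          # Q^{-1}(pfa)
    threshold = noise_power * (1.0 + q_inv / np.sqrt(n))
    return stat > threshold

# Simulate one sensing slot with the primary user ON at -5 dB SNR
n, noise_power = 2000, 1.0
noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
signal = np.sqrt(10 ** (-5 / 10) / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print("PU detected:", energy_detector(noise + signal, noise_power))
print("False alarm on noise only:", energy_detector(noise, noise_power))
```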

Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)

Procedia PDF Downloads 337
659 Coding Structures for Seated Row Simulation of an Active Controlled Vibration Isolation and Stabilization System for Astronaut’s Exercise Platform

Authors: Ziraguen O. Williams, Shield B. Lin, Fouad N. Matari, Leslie J. Quiocho

Abstract:

Simulation of the seated row exercise was a continued task to assist NASA in analyzing a one-dimensional vibration isolation and stabilization system for an astronaut exercise platform. Feedback delay and signal noise were added to the model, as previously done in the simulation for the squat exercise. Simulation runs for this study were conducted in two simulation tools, Trick and MBDyn, software simulation environments developed at the NASA Johnson Space Center. The exciter force in the simulation was calculated from the motion capture of an exerciser during a seated row exercise. The simulation runs include passive control, active control using a Proportional, Integral, Derivative (PID) controller, and active control using a Piecewise Linear Integral Derivative (PWLID) controller. Output parameters include the displacements of the exercise platform, the exerciser, and the counterweight; the force transmitted to the wall of the spacecraft; and the actuator force applied to the platform. The simulation results showed excellent force reduction in the actively controlled system compared to the passively controlled system, which showed less force reduction.
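
A minimal discrete PID sketch applied to a toy one-dimensional platform, to illustrate the kind of active control compared in the study; the mass, gains, time step, and exciter force are illustrative assumptions and not the values used in Trick or MBDyn.

```python
import numpy as np

class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy 1-D platform: mass m driven by an exciter force, with the PID actuator
# trying to hold the platform displacement at zero.
m, dt = 100.0, 0.001                                  # kg, s (illustrative)
pid = PID(kp=5e4, ki=1e3, kd=5e3, dt=dt)
x, v = 0.0, 0.0
for k in range(5000):
    t = k * dt
    f_exciter = 50.0 * np.sin(2 * np.pi * 1.0 * t)    # seated-row-like load
    f_actuator = pid.update(0.0 - x)                  # setpoint: zero displacement
    a = (f_exciter + f_actuator) / m
    v += a * dt
    x += v * dt
print("final displacement (m):", x)
```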

Keywords: control, counterweight, isolation, vibration

Procedia PDF Downloads 127
657 Time-Dependent Association between Recreational Cannabinoid Use and Memory Performance in Healthy Adults: A Neuroimaging Study of the Human Connectome Project

Authors: Kamyar Moradi

Abstract:

Background: There is mixed evidence regarding the association between recreational cannabinoid use and memory performance. One of the major reasons for this controversy is the set of cannabinoid-use-related covariates that influence an individual's cognitive status. Adjustment for these confounding variables provides more accurate insight into the real effects of cannabinoid use on memory status. In this study, we sought to investigate the association between recent recreational cannabinoid use and memory performance while correcting the model for other possible covariates such as demographic characteristics and the duration and amount of cannabinoid use. Methods: Cannabinoid users were assigned to two groups based on the results of a THC urine drug screen test (THC+ group: n = 110, THC- group: n = 410). The THC urine drug screen test has high sensitivity and specificity in detecting cannabinoid use in the last 3-4 weeks. The memory domain of the NIH Toolbox battery and brain MRI volumetric measures were compared between the groups while adjusting for confounding variables. Results: After Benjamini-Hochberg p-value correction, performance in all of the measured memory outcomes, including vocabulary comprehension, episodic memory, executive function/cognitive flexibility, processing speed, reading skill, working memory, and fluid cognition, was significantly weaker in the THC+ group (p-values less than 0.05). Also, the volumes of gray matter and of the left supramarginal, right precuneus, right inferior/middle temporal, right hippocampus, left entorhinal, and right pars orbitalis regions were significantly smaller in the THC+ group. Conclusions: This study provides evidence regarding the acute effect of recreational cannabis use on memory performance. Further studies are warranted to confirm the results.
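
A short sketch of the Benjamini-Hochberg false-discovery-rate correction named in the Results, applied to hypothetical p-values; the values and the q level are illustrative assumptions, not the study's data.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg FDR procedure: sort the p-values, find the largest
    k with p_(k) <= (k/m)*q, and reject hypotheses 1..k."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index satisfying the bound
        reject[order[: k + 1]] = True
    return reject

# Hypothetical p-values for memory outcomes compared between the groups
pvals = [0.001, 0.012, 0.030, 0.047, 0.20, 0.64, 0.90]
print(benjamini_hochberg(pvals, q=0.05))
```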

Keywords: brain MRI, cannabis, memory, recreational use, THC urine test

Procedia PDF Downloads 184
657 A Hybrid Combustion Chamber Design for Diesel Engines

Authors: R. Gopakumar, G. Nagarajan

Abstract:

Both DI and IDI systems possess inherent advantages as well as disadvantages. The objective of the present work is to obtain the maximum advantages of both systems by implementing a hybrid design. A hybrid combustion chamber design consists of two combustion chambers, viz., the main combustion chamber and an auxiliary combustion chamber. A fuel injector supplies the major quantity of fuel to the auxiliary chamber. Due to the increased swirl motion in the auxiliary chamber, mixing becomes more efficient, which contributes to a reduction in soot/particulate emissions. Also, by increasing the fuel injection pressure, NOx emissions can be reduced. The main objective of the hybrid combustion chamber design is to merge the positive features of both DI and IDI combustion chamber designs, which provides increased swirl motion and improved thermal efficiency. Due to the efficient utilization of fuel, low specific fuel consumption can be ensured. This system also aids in increasing the power output for the same compression ratio and injection timing as compared with conventional combustion chamber designs. The present system also reduces the heat transfer and fluid dynamic losses encountered in IDI diesel engines. Since the losses are reduced, the overall efficiency of the engine increases. It also minimizes the combustion noise and NOx emissions found in conventional DI diesel engines.

Keywords: DI, IDI, hybrid combustion, diesel engines

Procedia PDF Downloads 513
656 Geographic Information System and Dynamic Segmentation of Very High Resolution Images for the Semi-Automatic Extraction of Sandy Accumulation

Authors: A. Bensaid, T. Mostephaoui, R. Nedjai

Abstract:

A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mecheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, as well as sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons based on the numerical processing of LANDSAT images (5, 7, and 8) of three scenes, 197/37, 198/36, and 198/37, for the year 2020. As a second step, we explore the use of geospatial techniques to monitor the progression of sand dunes onto developed (urban) lands as well as the formation of sandy accumulations (dunes, dune fields, nebkhas, barchans, etc.). For this purpose, this study made use of a semi-automatic processing method for the dynamic segmentation of images with very high spatial resolution (SENTINEL-2 and Google Earth). This study was able to demonstrate that urban lands under current conditions are located in sand transit zones that are mobilized by winds from the northwest and southwest directions.

Keywords: land development, GIS, segmentation, remote sensing

Procedia PDF Downloads 140
655 Magnetocaloric Effect in Ho₂O₃ Nanopowder at Cryogenic Temperature

Authors: K. P. Shinde, M. V. Tien, H. Lin, H.-R. Park, S.-C. Yu, K. C. Chung, D.-H. Kim

Abstract:

Magnetic refrigeration provides an attractive alternative cooling technology due to its potential advantages over conventional cooling techniques based on gas compression, such as high cooling efficiency, environmental friendliness, low noise, and compactness. The magnetocaloric effect (MCE) occurs through changes in entropy (ΔS) and temperature (ΔT) under external magnetic fields. We have focused on identifying materials with a large MCE in two temperature regimes, not only room temperature but also cryogenic temperature, for specific technological applications such as space science and the liquefaction of hydrogen in the fuel industry. To date, the commonly used materials for cryogenic refrigeration are based on hydrated salts. In the present work, we report a giant MCE in rare-earth Ho₂O₃ nanopowder at cryogenic temperature. HoN nanoparticles with an average size of 30 nm were prepared using the plasma arc discharge method with a gas composition of N2/H2 (80%/20%). The prepared HoN was sintered in an air atmosphere at 1200 °C for 24 hrs to convert it into the oxide. Structural and morphological properties were studied by XRD and SEM. XRD confirms the pure phase and cubic crystal structure of Ho₂O₃ without any impurity within the error range. It was discovered that holmium oxide exhibits a giant MCE at low temperature without magnetic hysteresis loss, with a second-order antiferromagnetic phase transition at a Néel temperature of around 2 K. The maximum entropy change was found to be 25.2 J/kgK at an applied field of 6 T.
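
A hedged NumPy sketch of how the magnetic entropy change is commonly evaluated from magnetization isotherms via the Maxwell relation ΔS_M(T) = ∫₀^Hmax (∂M/∂T)_H dH; the synthetic M(T, H) data and units are illustrative assumptions, not the measured Ho₂O₃ curves.

```python
import numpy as np

def entropy_change(T, H, M):
    """Magnetic entropy change from magnetization isotherms using the
    Maxwell relation  dS_M(T) = integral_0^Hmax (dM/dT)_H dH.

    T : (nT,) temperatures (K)
    H : (nH,) applied fields (T)
    M : (nT, nH) magnetization at each (T, H) (A*m^2/kg)
    Returns dS_M in J/(kg*K) at each temperature.
    """
    dM_dT = np.gradient(M, T, axis=0)      # dM/dT at constant H
    return np.trapz(dM_dT, H, axis=1)      # integrate over field

# Synthetic magnetization isotherms (illustrative only, not measured data)
T = np.linspace(1.0, 10.0, 40)
H = np.linspace(0.0, 6.0, 61)
M = 50.0 / (1.0 + (T[:, None] / 2.0) ** 2) * np.tanh(2.0 * H[None, :])
dS = entropy_change(T, H, M)
print("largest |dS_M|: %.2f J/(kg*K) at T = %.2f K" % (np.abs(dS).max(), T[np.argmin(dS)]))
```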

Keywords: magnetocaloric effect, Ho₂O₃, magnetic entropy change, nanopowder

Procedia PDF Downloads 138
654 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multi-Input Multiple-Output

Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin

Abstract:

With the increasing number of wireless devices and high-bandwidth operations, wireless networking and communications are becoming overcrowded. To cope with such a crowded situation, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD has been used to enable beamforming, a major part of massive MIMO, and to best exploit the transmission and reception of pilot sequences. All these benefits are only possible if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE, also proposed in many research works. We have optimized these methods using a genetic algorithm (GA) to minimize the mean squared error and to find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results have shown that the use of the GA worked well with the existing algorithms in a Rayleigh slow-fading channel in the presence of additive white Gaussian noise. We found that the GA-optimized LS is better than the existing algorithms, as the GA provides an optimal result within a few iterations in terms of MSE with respect to SNR and computational complexity.
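
A minimal NumPy sketch of the pilot-based least-squares (LS) channel estimate that serves as the baseline the paper's GA then refines; the pilot length, antenna count, and 10 dB SNR are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ls_channel_estimate(X, y):
    """Pilot-based least-squares channel estimate  h_LS = (X^H X)^{-1} X^H y,
    where X is the (n_pilots x n_antennas) pilot matrix and y the received
    vector."""
    return np.linalg.solve(X.conj().T @ X, X.conj().T @ y)

# Toy uplink: 8 pilot symbols, 4 antennas, Rayleigh channel, AWGN at 10 dB SNR
n_pilots, n_ant, snr_db = 8, 4, 10.0
X = (rng.standard_normal((n_pilots, n_ant)) + 1j * rng.standard_normal((n_pilots, n_ant))) / np.sqrt(2)
h = (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)) / np.sqrt(2)
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
y = X @ h + noise_std * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))

h_hat = ls_channel_estimate(X, y)
print("MSE of LS estimate:", np.mean(np.abs(h - h_hat) ** 2))
```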

Keywords: channel estimation, LMMSE, LS, MIMO, MMSE

Procedia PDF Downloads 184
653 A Selection Approach: Discriminative Model for Nominal Attributes-Based Distance Measures

Authors: Fang Gong

Abstract:

Distance measures are an indispensable part of many instance-based learning (IBL) and machine learning (ML) algorithms. The value difference metric (VDM) and the inverted specific-class distance measure (ISCDM) are among the top-performing distance measures that address nominal attributes. VDM performs well in some domains owing to its simplicity and poorly in others that contain missing values and non-class attribute noise. ISCDM, however, typically works better than VDM on such domains. To maximize their advantages and avoid their disadvantages, this paper proposes a selection approach: a discriminative model for nominal-attribute-based distance measures. More concretely, VDM and ISCDM are built independently on a training dataset at the training stage, and the more credible one is recorded for each training instance. At the test stage, the nearest neighbor of each test instance is first found by either VDM or ISCDM, and then the more reliable model of its nearest neighbor is chosen to predict its class label. This is simply denoted as a discriminative distance measure (DDM). Experiments are conducted on 34 University of California at Irvine (UCI) machine learning repository datasets, and they show that DDM retains the interpretability and simplicity of VDM and ISCDM but significantly outperforms the original VDM and ISCDM and other state-of-the-art competitors in terms of accuracy.
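
A small sketch of the value difference metric component described above, estimating P(class | attribute value) from a toy nominal dataset; the dataset, attribute values, and q = 2 exponent are illustrative assumptions.

```python
from collections import defaultdict

def fit_vdm(X, y):
    """Estimate the conditional class probabilities P(c | attribute=value)
    needed by the Value Difference Metric from a nominal training set.
    X is a list of attribute-value tuples, y the class labels."""
    counts = defaultdict(lambda: defaultdict(int))   # (attr, value) -> class -> count
    totals = defaultdict(int)                        # (attr, value) -> count
    classes = sorted(set(y))
    for row, label in zip(X, y):
        for a, v in enumerate(row):
            counts[(a, v)][label] += 1
            totals[(a, v)] += 1
    probs = {key: {c: counts[key][c] / totals[key] for c in classes}
             for key in totals}
    return probs, classes

def vdm_distance(x1, x2, probs, classes, q=2):
    """VDM distance: sum over attributes of the class-probability differences."""
    d = 0.0
    for a, (v1, v2) in enumerate(zip(x1, x2)):
        p1 = probs.get((a, v1), {})
        p2 = probs.get((a, v2), {})
        d += sum(abs(p1.get(c, 0.0) - p2.get(c, 0.0)) ** q for c in classes)
    return d

# Tiny nominal dataset: (outlook, wind) -> play?
X = [("sunny", "weak"), ("sunny", "strong"), ("rain", "weak"), ("rain", "strong")]
y = ["yes", "no", "yes", "no"]
probs, classes = fit_vdm(X, y)
print(vdm_distance(("sunny", "weak"), ("rain", "weak"), probs, classes))    # 0.0
print(vdm_distance(("sunny", "weak"), ("sunny", "strong"), probs, classes)) # 2.0
```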

Keywords: distance measure, discriminative model, nominal attributes, nearest neighbor

Procedia PDF Downloads 104
652 Analysis of Composite Health Risk Indicators Built at a Regional Scale and Fine Resolution to Detect Hotspot Areas

Authors: Julien Caudeville, Muriel Ismert

Abstract:

Analyzing the relationship between environment and health has become a major preoccupation for public health, as evidenced by the emergence of the French national plans for health and environment. These plans have identified the following two priorities: (1) to identify and manage geographic areas where hotspot exposures are suspected to generate a potential hazard to human health; (2) to reduce exposure inequalities. At a regional scale and at the fine resolution required for exposure outcomes, environmental monitoring networks are not sufficient to characterize the multidimensionality of the exposure concept. In an attempt to increase the representativeness of spatial exposure assessment approaches, composite risk indicators could be built using additional available databases and theoretical frameworks for combining risk factors. To achieve those objectives, combining data processing and transfer modeling with a spatial approach is a fundamental prerequisite, which implies the need to first overcome several scientific limitations: to define the variables of interest and the indicators that could be built to associate and describe the global source-effect chain; to link and process data from different sources and different spatial supports; and to develop adapted methods in order to improve the representativeness and resolution of spatial data. A GIS-based modeling platform for quantifying human exposure to chemical substances (PLAINE: environmental inequalities analysis platform) was used to build health risk indicators within the Lorraine region (France). These indicators combined chemical substances (in soil, air, and water) and noise risk factors. Tools have been developed using modeling, spatial analysis, and geostatistical methods to build and discretize the variables of interest from different supports and resolutions onto a 1 km² regular grid within the Lorraine region. For example, surface soil concentrations have been estimated by developing a kriging method able to integrate surface and point spatial supports. Then, an exposure model developed by INERIS was used to assess the transfer from soil to individual exposure through ingestion pathways. We used the distance from polluted soil sites to build a proxy for contaminated sites. The air indicator combined modeled concentrations and estimated emissions to take into account 30 pollutants in the analysis. For water, drinking water concentrations were compared to drinking water standards to build a score spatialized using a map of the distribution units served. The Lden (day-evening-night) indicator was used to map noise around road infrastructures. Aggregation of the different risk factors was carried out using different methodologies to discuss the impact of weighting and aggregation procedures on the effectiveness of risk maps for decisions aimed at safeguarding citizens' health. The results make it possible to identify pollutant sources, determinants of exposure, and potential hotspot areas. A diagnostic tool was developed for stakeholders to visualize and analyze the composite indicators in an operational and accurate manner. The designed support system will be used in many applications and contexts: (1) mapping environmental disparities throughout the Lorraine region; (2) identifying vulnerable populations and determinants of exposure to set priorities and targets for pollution prevention, regulation, and remediation; (3) providing an exposure database to quantify relationships between environmental indicators and cancer mortality data provided by the French Regional Health Observatories.

Keywords: health risk, environment, composite indicator, hotspot areas

Procedia PDF Downloads 239
651 Image Segmentation Using Active Contours Based on Anisotropic Diffusion

Authors: Shafiullah Soomro

Abstract:

Active contours are one of the image segmentation techniques, and their goal is to capture required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour method based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of catching uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitialization of the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found that the proposed method performs better qualitatively and quantitatively and yields results with higher accuracy compared to other state-of-the-art methods.
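
A compact NumPy sketch of the Perona-Malik anisotropic diffusion used here as the feature-enhancement step; the conductance function, iteration count, and parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def perona_malik(image, n_iter=20, kappa=15.0, lam=0.2):
    """Perona-Malik anisotropic diffusion (exponential conductance).
    Smooths background variations while preserving strong edges."""
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # Edge-stopping conductance g(|grad u|) = exp(-(|grad u|/kappa)^2)
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# Noisy step edge: diffusion flattens the noise but keeps the edge sharp
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 100.0
noisy = img + rng.normal(0, 10, img.shape)
smoothed = perona_malik(noisy)
print("noise std before/after:", noisy[:, :30].std().round(2), smoothed[:, :30].std().round(2))
```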

Keywords: active contours, anisotropic diffusion, level-set, partial differential equations

Procedia PDF Downloads 150
650 Diagnostic and Prognostic Use of Kinetics of MicroRNA and Cardiac Biomarker in Acute Myocardial Infarction

Authors: V. Kuzhandai Velu, R. Ramesh

Abstract:

Background and objectives: Acute myocardial infarction (AMI) is the most common cause of mortality and morbidity. Over the last decade, microRNAs (miRs) have emerged as a potential marker for detecting AMI. The current study evaluates the kinetics and importance of miRs in the differential diagnosis of ST-segment elevation MI (STEMI) and non-STEMI (NSTEMI), their correlation with conventional biomarkers, and their ability to predict the immediate outcome of AMI in terms of arrhythmias and left ventricular (LV) dysfunction. Materials and Method: A total of 100 AMI patients were recruited for the study. Routine cardiac biomarker and miRNA levels were measured at diagnosis and serially at admission and at 6, 12, 24, and 72 hrs. The baseline biochemical parameters were analyzed. The expression of miRs was compared between STEMI and NSTEMI at different time intervals. The diagnostic utility of miR-1, miR-133, miR-208, and miR-499 levels was analyzed using RT-PCR and various diagnostic statistical tools such as ROC, odds ratio, and likelihood ratio. Results: miR-1, miR-133, and miR-499 showed peak concentration at 6 hours, whereas miR-208 showed highly significant differences at all time intervals. miR-133 demonstrated the maximum area under the curve at different time intervals in the differential diagnosis of STEMI and NSTEMI, followed by miR-499 and miR-208. Evaluation of miRs for predicting arrhythmia and LV dysfunction using the admission sample demonstrated that miR-1 (OR = 8.64; LR = 1.76) and miR-208 (OR = 26.25; LR = 5.96) showed the maximum odds ratio and likelihood ratio, respectively. Conclusion: Circulating miRNAs showed highly significant differences between STEMI and NSTEMI in AMI patients. The peak occurred much earlier than that of the conventional biomarkers. miR-133, miR-208, and miR-499 can be used in the differential diagnosis of STEMI and NSTEMI, whereas miR-1 and miR-208 could be used in the prediction of arrhythmia and LV dysfunction, respectively.

Keywords: myocardial infarction, cardiac biomarkers, microRNA, arrhythmia, left ventricular dysfunction

Procedia PDF Downloads 117
649 On-Chip Sensor Ellipse Distribution Method and Equivalent Mapping Technique for Real-Time Hardware Trojan Detection and Location

Authors: Longfei Wang, Selçuk Köse

Abstract:

Hardware Trojans have become a great concern as integrated circuit (IC) technology advances and not all manufacturing steps of an IC are accomplished within one company. Real-time hardware Trojan detection has proven to be a feasible way to detect randomly activated Trojans that cannot be detected at the testing stage. On-chip sensors serve as a great candidate for implementing real-time hardware Trojan detection; however, the optimization of on-chip sensors has not been thoroughly investigated, and the location of the Trojan has not been carefully explored. In this paper, an on-chip sensor ellipse distribution method and an equivalent mapping technique are proposed, based on the characteristics of the on-chip power delivery network, to address the optimization and distribution of on-chip sensors for real-time hardware Trojan detection as well as to estimate the location and current consumption of a hardware Trojan. Simulation results verify that hardware Trojan activation can be effectively detected and that the location of a hardware Trojan can be efficiently estimated with less than 5% error for a realistic power grid using our proposed methods. The proposed techniques therefore lay a solid foundation for isolation and even deactivation of hardware Trojans through accurate location of the Trojans.

Keywords: hardware trojan, on-chip sensor, power distribution network, power/ground noise

Procedia PDF Downloads 380
648 Characterization and Monitoring of the Yarn Faults Using Diametric Fault System

Authors: S. M. Ishtiaque, V. K. Yadav, S. D. Joshi, J. K. Chatterjee

Abstract:

A DIAMETRIC FAULTS system has been developed that captures bi-directional images of yarn continuously and sequentially and provides a detailed classification of faults. A novel mathematical framework developed on the acquired bi-directional images forms the basis of fault classification into four broad categories, namely, Thick1, Thick2, Thin, and Normal Yarn. A discretised version of the Radon transform has been used to convert the bi-directional images into one-dimensional signals. The images were divided into training and test sample sets. A Karhunen-Loève Transform (KLT) basis is computed for the signals from the images in the training set for each fault class, taking the six highest-energy eigenvectors. The fault class of a test image is identified by taking the Euclidean distance of its signal from its projection on the KLT basis for each sample realization and fault class in the training set. The Euclidean distance, applied using various techniques, is used for classifying an unknown fault class. An accuracy of about 90% is achieved in detecting the correct fault class using these techniques. The four broad fault classes were further sub-classified into four sub-groups based on user-set boundary limits for fault length and fault volume. The fault cross-sectional area and the fault length define the total volume of the fault. A distinct distribution of faults is found in terms of their volume and physical dimensions, which can be used for monitoring the yarn faults. It has been shown from the configuration-based characterization and classification that spun yarn faults arising out of mass variation exhibit distinct characteristics in terms of their contours, sizes, and shapes, apart from their frequency of occurrence.
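
A hedged NumPy sketch of the KLT-basis classification step: a per-class basis is built from the six highest-energy eigenvectors, and a test signal is assigned to the class whose basis reconstructs it with the smallest Euclidean error. The synthetic "Radon profile" shapes and class names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def klt_basis(signals, n_components=6):
    """Karhunen-Loeve (KLT/PCA) basis of one fault class: the class mean and
    the six highest-energy eigenvectors of the centred training signals."""
    X = np.asarray(signals, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def distance_to_class(signal, mean, basis):
    """Euclidean distance between a signal and its projection on the class
    KLT basis (the reconstruction error used for fault classification)."""
    centred = signal - mean
    return np.linalg.norm(centred - basis.T @ (basis @ centred))

def classify(signal, models):
    return min(models, key=lambda c: distance_to_class(signal, *models[c]))

# Hypothetical 1-D Radon profiles: 'Thick1' faults bulge near x=0.3,
# 'Thin' faults dip near x=0.7 (illustrative shapes only).
x = np.linspace(0, 1, 100)
bump = lambda c: np.exp(-(x - c) ** 2 / 0.01)
thick = [(3 + rng.normal(0, 0.2)) * bump(0.3) + rng.normal(0, 0.05, x.size) for _ in range(20)]
thin = [-(3 + rng.normal(0, 0.2)) * bump(0.7) + rng.normal(0, 0.05, x.size) for _ in range(20)]
models = {"Thick1": klt_basis(thick), "Thin": klt_basis(thin)}

test = 2.8 * bump(0.3) + rng.normal(0, 0.05, x.size)
print("predicted fault class:", classify(test, models))
```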

Keywords: Euclidean distance, fault classification, KLT, Radon Transform

Procedia PDF Downloads 254
647 Life Cycle Assessment-Based Environmental Assessment of the Production and Maintenance of Wooden Windows

Authors: Pamela Del Rosario, Elisabetta Palumbo, Marzia Traverso

Abstract:

The building sector plays an important role in addressing pressing environmental issues such as climate change and resource scarcity. The energy performance of buildings is considerably affected by the external envelope. In fact, a considerable proportion of the building energy demand is due to energy losses through the windows. Nevertheless, according to the literature, paying attention only to the contribution of windows to the building energy performance, i.e., their influence on energy use during building operation, could result in a partial evaluation. Hence, it is important to consider not only the building energy performance but also the environmental performance of windows, and not only during the operational stage but along their complete life cycle. Life Cycle Assessment (LCA) according to ISO 14040:2006 and ISO 14044:2006+A1:2018 is one of the most adopted and robust methods to evaluate the environmental performance of products throughout their complete life cycle. This life-cycle-based approach avoids shifting the environmental impacts of one life cycle stage to another, allowing them to be allocated to the stage in which they originated and allowing the adoption of measures that optimize the environmental performance of the product. Moreover, the LCA method is widely implemented in the construction sector to assess whole buildings as well as construction products and materials. LCA is regulated by the European Standards EN 15978:2011, at the building level, and EN 15804:2012+A2:2019, at the level of construction products and materials. In this work, the environmental performance of wooden windows was assessed by implementing the LCA method and adopting primary data. More specifically, the emphasis is given to embedded and operational impacts. Furthermore, correlations are made between these environmental impacts and aspects such as the type of wood and the window transmittance. In the particular case of the operational impacts, special attention is placed on the definition of suitable maintenance scenarios that consider the potential influence of climate on the environmental impacts. For this purpose, a literature review was conducted, and expert consultation was carried out. The study underlined the variability of the embedded environmental impacts of wooden windows by considering different wood types and transmittance values. The results also highlighted the need to define appropriate maintenance scenarios for precise assessment results. It was found that both the service life and the window maintenance requirements, in terms of treatment and its frequency, are highly dependent not only on the wood type and its treatment during the manufacturing process but also on the weather conditions of the place where the window is installed. In particular, it became evident that maintenance-related environmental impacts were highest for climate regions with the lowest temperatures and the greatest amount of precipitation.

Keywords: embedded impacts, environmental performance, life cycle assessment, LCA, maintenance stage, operational impacts, wooden windows

Procedia PDF Downloads 219
646 Meteosat Second Generation Image Compression Based on the Radon Transform and Linear Predictive Coding: Comparison and Performance

Authors: Cherifi Mehdi, Lahdir Mourad, Ameur Soltane

Abstract:

Image compression is used to reduce the number of bits required to represent an image. The Meteosat Second Generation (MSG) satellite allows the acquisition of 12 image files every 15 minutes, which results in large database sizes. The transform selected for image compression should contribute to reducing the data representing the images. The Radon transform retrieves the Radon points that represent the sum of the pixels along a given angle for each direction. Linear predictive coding (LPC) with filtering provides a good decorrelation of the Radon points using a predictor constituted by the Symmetric Nearest Neighbor (SNN) filter coefficients, which results in losses during decompression. Finally, Run Length Coding (RLC) gives a high and fixed compression ratio regardless of the input image. In this paper, a novel image compression method based on the Radon transform and linear predictive coding (LPC) for MSG images is proposed. MSG image compression based on the Radon transform and LPC provides a good compromise between compression and quality of reconstruction. A comparison of our method with others, two based on the DCT and one on DWT bi-orthogonal filtering, is carried out to show the robustness of the Radon transform against quantization noise and to evaluate the performance of our method. Evaluation criteria such as PSNR and the compression ratio show the efficiency of our compression method.
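
A tiny sketch of the Run Length Coding stage mentioned above, encoding each run of identical values as a (value, count) pair; the residual sequence shown is an illustrative assumption.

```python
def run_length_encode(values):
    """Run Length Coding (RLC): replace each run of identical values with a
    (value, run_length) pair - the final lossless stage of the pipeline."""
    if not values:
        return []
    encoded, current, count = [], values[0], 1
    for v in values[1:]:
        if v == current:
            count += 1
        else:
            encoded.append((current, count))
            current, count = v, 1
    encoded.append((current, count))
    return encoded

def run_length_decode(pairs):
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

# Quantized LPC residuals tend to contain long runs of zeros, which is
# where RLC achieves its compression gain.
residuals = [0, 0, 0, 0, 5, 5, 0, 0, 0, -3, 0, 0, 0, 0, 0]
pairs = run_length_encode(residuals)
print(pairs)   # [(0, 4), (5, 2), (0, 3), (-3, 1), (0, 5)]
assert run_length_decode(pairs) == residuals
```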

Keywords: image compression, radon transform, linear predictive coding (LPC), run length coding (RLC), meteosat second generation (MSG)

Procedia PDF Downloads 403
645 Artificial Bee Colony Optimization for SNR Maximization through Relay Selection in Underlay Cognitive Radio Networks

Authors: Babar Sultan, Kiran Sultan, Waseem Khan, Ijaz Mansoor Qureshi

Abstract:

In this paper, a novel idea for the performance enhancement of the secondary network is proposed for Underlay Cognitive Radio Networks (CRNs). In Underlay CRNs, primary users (PUs) impose strict interference constraints on the secondary users (SUs). The proposed scheme is based on Artificial Bee Colony (ABC) optimization for relay selection and power allocation to handle the highlighted primary challenge of Underlay CRNs. ABC is a simple, population-based optimization algorithm that attains the global optimum solution by combining local search methods (Employed and Onlooker Bees) and global search methods (Scout Bees). The proposed two-phase relay selection and power allocation algorithm aims to maximize the signal-to-noise ratio (SNR) at the destination while operating in underlay mode. The proposed algorithm has low computational complexity, and its performance is verified through simulation results for different numbers of potential relays, different interference threshold levels, and different transmit power thresholds for the selected relays.

Keywords: artificial bee colony, underlay spectrum sharing, cognitive radio networks, amplify-and-forward

Procedia PDF Downloads 568
644 The Effects of Electron Trapping by Electron-Acoustic Waves Excited with Electron Beam

Authors: Abid Ali Abid

Abstract:

One-dimensional (1-D) particle-in-cell (PIC) electrostatic simulations are carried out to investigate electrostatic waves whose constituents are hot, cold, and beam electrons in a background of motionless positive ions. The electrostatic modes excited are electron acoustic waves, beam-driven waves, and Langmuir waves. It is found that the relevant plasma parameters, for example, the hot electron temperature, the beam electron drift speed, and the electron beam density, significantly modify the electrostatic wave profiles. In the nonlinear stage, the wave-particle interaction becomes more evident and the waves reach their saturation level. Consequently, electrons become trapped in the waves and trapping vortices are clearly formed. These trapping vortices and the mixing of the electrons in phase space finally lead to electron thermalization. It is observed that for high beam-electron densities, the solitary waves have a bipolar form of the electric field. These solitons are the nonlinear Bernstein-Greene-Kruskal wave modes attributed to the trapping of electrons in the potential well of a phase-space hole. These examinations revealed that electrostatic waves are excited in the beam-plasma model, producing waves with broad frequency ranges, which may clarify the broadband electrostatic noise (BEN) spectrum studied in the auroral zone.

Keywords: electron acoustic waves, trapping of cold electrons, Langmuir waves, particle-in-cell simulation

Procedia PDF Downloads 192
643 Non-Destructive Technique for Detection of Voids in the IC Package Using Terahertz-Time Domain Spectrometer

Authors: Sung-Hyeon Park, Jin-Wook Jang, Hak-Sung Kim

Abstract:

In recent years, the Terahertz (THz) time-domain spectroscopy (TDS) imaging method has received considerable interest as a promising non-destructive technique for the detection of internal defects. In comparison to other non-destructive techniques such as the x-ray inspection method, scanning acoustic tomography (SAT), and the microwave inspection method, the THz-TDS imaging method has many advantages. First, it can measure the exact thickness and location of defects. Second, it does not require a liquid couplant, which is crucial for delivering the power of the ultrasonic wave in the SAT method. Third, it does not damage materials or harm human bodies, as the x-ray inspection method does. Finally, it exhibits better spatial resolution than the microwave inspection method. However, this technology could not readily be applied to IC packages, because THz radiation can penetrate a wide variety of materials, including polymers and ceramics, but not metals. Therefore, it is difficult to detect defects in IC packages, which are composed not only of epoxy and semiconductor materials but also of various metals such as copper, aluminum, and gold. In this work, we propose a special method for detecting voids in the IC package using a THz-TDS imaging system. The IC package specimens for this study were prepared by the Packaging Engineering Team at Samsung Electronics. Our THz-TDS imaging system has a special reflection mode, called pitch-catch mode, which can change the incidence angle in the reflection mode from 10° to 70°, while other systems have only transmission and normal reflection modes, or a reflection mode fixed at a certain angle. Therefore, to find the voids in the IC package, we investigated the appropriate angle by changing the incidence angle of the THz wave emitter and detector. As a result, the voids in the IC packages were successfully detected using our THz-TDS imaging system.

Keywords: terahertz, non-destructive technique, void, IC package

Procedia PDF Downloads 466
642 Design and Optimization for a Compliant Gripper with Force Regulation Mechanism

Authors: Nhat Linh Ho, Thanh-Phong Dao, Shyh-Chour Huang, Hieu Giang Le

Abstract:

This paper presents the design and optimization of a compliant gripper. The gripper is constructed based on the concept of a compliant mechanism with flexure hinges. A passive force regulation mechanism is presented to control the grasping force on a micro-sized object instead of using a force sensor. The force regulation mechanism is designed using planar springs. The gripper is expected to obtain a large range of displacement to handle objects of various sizes. First, the statics and dynamics of the gripper are investigated using finite element analysis in ANSYS software. Then, the design parameters of the gripper are optimized via the Taguchi method. An L9 orthogonal array is used to establish the experimental matrix. Subsequently, the signal-to-noise ratio is analyzed to find the optimal solution. Finally, response surface methodology is employed to model the relationship between the design parameters and the output displacement of the gripper. The design of experiments method is then used to analyze the sensitivity so as to determine the effect of each parameter on the displacement. The results showed that the compliant gripper can move with a large displacement of 213.51 mm and that the force regulation mechanism is expected to be useful for high-precision positioning systems.

Keywords: flexure hinge, compliant mechanism, compliant gripper, force regulation mechanism, Taguchi method, response surface methodology, design of experiment

Procedia PDF Downloads 315
641 Research on the Two-Way Sound Absorption Performance of Multilayer Material

Authors: Yang Song, Xiaojun Qiu

Abstract:

Multilayer materials are applied in many areas of acoustics. Multilayer porous materials are dominant in room absorbers, while multilayer viscoelastic materials are the basic components of underwater absorption coatings. In most cases, attention is concentrated on the one-way sound absorption performance of a multilayer material according to the location of the sound source. However, the two-way sound absorption performance also needs to be known in some special cases in which sound is produced on both sides of the material and the two sides may contact different media. In this article, this kind of case is researched. The multilayer material was composed of a viscoelastic layer, a steel plate, and a porous layer. The two sides of the multilayer material contact water and air, respectively. A theoretical model was given to describe the sound propagation and impedance in the multilayer absorption material. The two-way sound absorption properties of several multilayer materials whose two sides contact different media were calculated. The calculated results showed that the difference between the two-way sound absorption coefficients is obvious. The frequency, the relation of layer thicknesses, and the parameters of the multilayer materials all influence the two-way sound absorption coefficients, but to varying degrees. All these simulation results are analyzed in the article. It was found that two-way sound absorption at different frequencies can be promoted by optimizing the configuration parameters. This work will improve the performance of underwater sound absorption coatings that can absorb incident sound from the water and reduce noise radiation from the inside space.

Keywords: different media, multilayer material, sound absorption coating, two-way sound absorption

Procedia PDF Downloads 527
640 Landsat 8-TIRS NEΔT at Kīlauea Volcano and the Active East Rift Zone, Hawaii

Authors: Flora Paganelli

Abstract:

The radiometric performance of remotely sensed images is important for volcanic monitoring. The Thermal Infrared Sensor (TIRS) on board Landsat 8 was designed with specific requirements regarding the noise-equivalent change in temperature (NEΔT), at ≤ 0.4 K at 300 K, for the two thermal infrared bands B10 and B11. This study investigated the on-orbit NEΔT of the two TIRS bands using a scene-based method on clear-sky images over the volcanic activity of Kīlauea Volcano and the active East Rift Zone (Hawaii), in order to optimize the use of TIRS data. Results showed that the NEΔTs of the two bands exceeded the design specification by an order of magnitude at 300 K. Both the separate bands and the split-window algorithm were examined to estimate the effect of NEΔT on land surface temperature (LST) retrieval and the contribution of NEΔT to the final LST error. These results are also useful in the current efforts to assess the requirements for a volcanology research campaign using the Hyperspectral Infrared Imager (HyspIRI), whose airborne prototype MODIS/ASTER instruments are planned to be flown by NASA as a single campaign to the Hawaiian Islands in support of volcanology and coastal area monitoring in 2016.

Keywords: landsat 8, radiometric performance, thermal infrared sensor (TIRS), volcanology

Procedia PDF Downloads 226
639 Visualization of Corrosion at Plate-Like Structures Based on Ultrasonic Wave Propagation Images

Authors: Aoqi Zhang, Changgil Lee Lee, Seunghee Park

Abstract:

A non-contact nondestructive technique using a laser-induced ultrasonic wave generation method was applied to visualize corrosion damage in aluminum alloy plate structures. The ultrasonic waves were generated by an Nd:YAG pulse laser, and a galvanometer-based laser scanner was used to scan a specific area of the target structure. At the same time, wave responses were measured at a piezoelectric sensor attached to the target structure. The visualization of structural damage was achieved by calculating the logarithmic values of the root mean square (RMS). The damage-sensitive feature was defined as the scattering characteristics of the waves that encounter the corrosion damage. The corrosion damage was artificially formed using hydrochloric acid. To observe the effect of the location where the corrosion was formed, both sides of the plate were scanned over the same area. The effects of the depth and the size of the corrosion were also considered. The results indicated that the damage was successfully visualized in almost all cases, whether it was formed on the front or the back side. However, the damage could not be clearly detected when the depth of the corrosion was shallow. In future work, a signal processing algorithm needs to be developed to visualize the damage more clearly by improving the signal-to-noise ratio.
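
A short NumPy sketch of the log-RMS mapping step used for the visualization; the scan grid size, trace length, and the synthetic "damaged" patch are illustrative assumptions, not measured data.

```python
import numpy as np

def log_rms_map(responses):
    """Damage-visualization value for each laser scan point: the logarithm
    of the root-mean-square of the measured ultrasonic wave response.
    `responses` has shape (ny, nx, n_samples) - one time trace per point."""
    rms = np.sqrt(np.mean(responses ** 2, axis=-1))
    return 20.0 * np.log10(rms + 1e-12)        # dB scale; epsilon avoids log(0)

# Synthetic scan: waves scattered by a 'corroded' patch have larger amplitude
rng = np.random.default_rng(0)
grid = 0.1 * rng.standard_normal((40, 40, 500))    # baseline responses
grid[15:25, 20:30, :] *= 3.0                        # stronger scattering at the damage
image = log_rms_map(grid)
print("map range (dB):", image.min().round(1), image.max().round(1))
```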

Keywords: non-destructive testing, corrosion, pulsed laser scanning, ultrasonic waves, plate structure

Procedia PDF Downloads 291
638 Low Field Microwave Absorption and Magnetic Anisotropy in TM Co-Doped ZnO System

Authors: J. Das, T. S. Mahule, V. V. Srinivasu

Abstract:

An electron spin resonance (ESR) study at 9.45 GHz with a field modulation frequency of 100 Hz was performed on bulk polycrystalline Mn:TM (Fe/Ni) and Mn:RE (Gd/Sm) co-doped ZnO samples with composition Zn1-x(Mn:TM/RE)xO, synthesized by the solid-state reaction route and sintered at 500 °C. The room-temperature microwave absorption data, collected by sweeping the DC magnetic field from -500 to 9500 G for the Mn:Fe and Mn:Ni co-doped ZnO samples, exhibit a rarely reported non-resonant low-field absorption (NRLFA) in addition to a strong absorption at around 3350 G, usually associated with ferromagnetic resonance (FMR) satisfying Larmor's relation due to absorption in the fully saturated state. The observed low-field absorption is distinct from the ferromagnetic resonance even at low temperature and shows hysteresis. Interestingly, it shows a phase opposite to that of the main ESR signal of the samples, which indicates that the low-field absorption has a minimum value at zero magnetic field whereas the ESR signal has a maximum value. The major resonance peak, as well as the peak corresponding to the low-field absorption, exhibits an asymmetric nature, indicating magnetic anisotropy in the sample normally associated with intrinsic ferromagnetism. The anisotropy parameter for the Mn:Ni co-doped ZnO sample is noticeably higher. The g values also support the presence of oxygen vacancies and clusters in the samples. These samples have shown room-temperature ferromagnetism in SQUID measurements. However, in the rare-earth (RE) co-doped samples (Zn1-x(Mn:Gd/Sm)xO), which show paramagnetic behavior at room temperature, the low-field microwave signals are not observed. As microwave currents due to itinerant electrons can lead to ohmic losses inside the sample, we speculate that the more delocalized 3d electrons contributed by the TM dopants facilitate such microwave currents, leading to the loss and hence absorption at low field, which is also supported by the increase in current with increased microwave power. Besides, since Fe and Ni have intrinsic spin polarization with a polarizability of around 45%, doping with Fe and Ni is expected to enhance spin-polarization-related effects in ZnO. We emphasize that in this case Fe and Ni doping contributes to a polarized current which interacts with the magnetization (spin) vector and gets scattered, giving rise to the absorption loss.

Keywords: co-doping, electron spin resonance, hysteresis, non-resonant microwave absorption

Procedia PDF Downloads 305
637 Enhancing Patch Time Series Transformer with Wavelet Transform for Improved Stock Prediction

Authors: Cheng-yu Hsieh, Bo Zhang, Ahmed Hambaba

Abstract:

Stock market prediction has long been an area of interest for both expert analysts and investors, driven by its complexity and the noisy, volatile conditions it operates under. This research examines the efficacy of combining the Patch Time Series Transformer (PatchTST) with wavelet transforms, specifically focusing on Haar and Daubechies wavelets, in forecasting the adjusted closing price of the S&P 500 index for the following day. By comparing the performance of the augmented PatchTST models with traditional predictive models such as Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformers, this study highlights significant enhancements in prediction accuracy. The integration of the Daubechies wavelet with PatchTST notably excels, surpassing other configurations and conventional models in terms of Mean Absolute Error (MAE) and Mean Squared Error (MSE). The success of the PatchTST model paired with Daubechies wavelet is attributed to its superior capability in extracting detailed signal information and eliminating irrelevant noise, thus proving to be an effective approach for financial time series forecasting.
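
A hedged NumPy sketch of a single-level Haar wavelet decomposition of the kind used here to separate the smooth trend from noisy detail before forecasting; the synthetic price series and the thresholding rule are illustrative assumptions, not the paper's preprocessing.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar wavelet transform: split a series into smooth
    approximation and detail coefficients (orthonormal normalization)."""
    x = np.asarray(x, dtype=float)
    if x.size % 2:                       # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of the single-level Haar transform."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

# Denoise a toy 'adjusted close' series by zeroing the smallest detail
# coefficients (a crude hard threshold) before reconstruction.
rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 1, 256)) + 4500.0
approx, detail = haar_dwt(price)
detail[np.abs(detail) < np.quantile(np.abs(detail), 0.5)] = 0.0
denoised = haar_idwt(approx, detail)
print("max reconstruction change:", np.abs(denoised - price).max().round(3))
```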

Keywords: deep learning, financial forecasting, stock market prediction, patch time series transformer, wavelet transform

Procedia PDF Downloads 31