Search results for: high resolution SAR image
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22703

21653 Multimodality in Storefront Windows: The Impact of Verbo-Visual Design on Consumer Behavior

Authors: Angela Bargenda, Erhard Lick, Dhoha Trabelsi

Abstract:

Research in retailing has identified atmospherics as an essential element in enhancing store image, store patronage intentions, and the overall shopping experience in a retail environment. However, within atmospherics, store window design, an essential component of external store atmospherics, remains a vastly underrepresented phenomenon in extant scholarship. This paper seeks to fill this gap by exploring the relevance of store window design as an atmospheric tool. In particular, empirical evidence of theme-based theatrical storefront windows, which emphasize verbo-visual design elements, was found in Paris and New York. The purpose of this study was to identify to what extent such multimodal window designs of high-end department stores in metropolitan cities have an impact on store entry decisions and attitudes towards the retailer’s image. As theoretical constructs, the linguistic concept of multimodality and Mehrabian and Russell’s model from environmental psychology were applied. To answer the research question, two studies were conducted. For Study 1, a case study approach was selected to define three types of store window design based on different types of visual-verbal relations. Each type of store window design represented a different level of cognitive elaboration required for the decoding process. Study 2 consisted of an online survey of more than 300 respondents that examined the influence of these three types of store window design on the consumer behavioral variables mentioned above. The results of this study show that the higher the cognitive elaboration needed to decode the message of the store window, the lower the store entry propensity. In contrast, the higher the cognitive elaboration, the more favorable the perceived retailer image. One important conclusion is that, in order to increase consumers’ propensity to enter stores with theme-based theatrical storefront windows, retailers need to limit the cognitive elaboration required to decode their verbo-visual window design.

Keywords: consumer behavior, multimodality, store atmospherics, store window design

Procedia PDF Downloads 200
21652 An Overview of the Wind and Wave Climate in the Romanian Nearshore

Authors: Liliana Rusu

Abstract:

The goal of the proposed work is to provide a more comprehensive picture of the wind and wave climate in the Romanian nearshore, using results provided by numerical models. The Romanian coastal environment is located on the western side of the Black Sea, the more energetic part of the sea, an area with heavy maritime traffic and various offshore operations. Information about the wind and wave climate in Romanian waters is mainly based on observations at the Gloria drilling platform (70 km from the coast). As regards the waves, the measurements of wave characteristics are not very accurate due to the method used and are available only for a limited period. For this reason, wave simulations covering large temporal and spatial scales are an option for describing the wave climate better. To assess the wind climate in the target area over the period 1992-2016, data provided by the NCEP-CFSR (U.S. National Centers for Environmental Prediction - Climate Forecast System Reanalysis), consisting of wind fields at 10 m above sea level, are used. The high spatial and temporal resolution of the wind fields is good enough to represent the wind variability over the area. For the same 25-year period as considered for the wind climate, this study characterizes the wave climate from a wave hindcast data set produced by a SWAN (Simulating WAves Nearshore)-based model system driven by NCEP-CFSR winds. The wave simulation results, obtained with a two-level modelling scale, have been validated against both in situ measurements and remotely sensed data. The second level of the system, with a higher resolution in geographical space (0.02°×0.02°), is focused on the Romanian coastal environment. The main wave parameters simulated at this level are used to analyse the wave climate. The spatial distributions of the wind speed, wind direction, and mean significant wave height have been computed as the average of the total data. As the data show, the target area presents a generally moderate wave climate that is affected by storm events developed in the Black Sea basin. Both the wind and wave climate present high seasonal variability. All the results are presented as maps that help identify the most dangerous areas. A local analysis has also been carried out at some key locations corresponding to highly sensitive areas, such as the main Romanian harbors.
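
As an aside for readers reproducing this kind of analysis, the climatological maps described above reduce to time averages over the hindcast fields. A minimal Python sketch (all file names, array shapes, and variable names are illustrative assumptions, not artifacts of the paper):

```python
import numpy as np

# Hypothetical hindcast arrays: hourly fields over 25 years on the
# 0.02° x 0.02° coastal grid, shaped (time, lat, lon).
hs = np.load("swan_hs_1992_2016.npy")      # significant wave height [m]
u10 = np.load("cfsr_u10_1992_2016.npy")    # zonal wind at 10 m [m/s]
v10 = np.load("cfsr_v10_1992_2016.npy")    # meridional wind at 10 m [m/s]

# Climatological means over the full record (axis 0 = time)
hs_mean = hs.mean(axis=0)
wind_speed_mean = np.hypot(u10, v10).mean(axis=0)

# Mean wind direction from the averaged vector components
# (meteorological convention: the direction the wind blows FROM)
dir_mean = np.degrees(np.arctan2(-u10.mean(axis=0), -v10.mean(axis=0))) % 360.0
```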

Keywords: numerical simulations, Romanian nearshore, waves, wind

Procedia PDF Downloads 343
21651 A Comprehensive Study and Evaluation on Image Fashion Features Extraction

Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen

Abstract:

Clothing fashion represents a human’s aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects societal, cultural, and economic development. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related problem: what image feature best describes clothing fashion? To address this issue, we have designed and evaluated various image features, ranging from traditional low-level hand-crafted features to mid-level style awareness features to currently popular deep neural network-based features, which have shown state-of-the-art performance in various vision tasks. In summary, we tested the following nine feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple layer combination (CNNs&MLC), and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning-based feature representations far outperform traditional hand-crafted ones. Additionally, among all deep learning-based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
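
The best-performing variants combine CNN embeddings with metric learning or feature clustering. A hedged Python sketch of the underlying retrieval-plus-clustering idea, assuming embeddings have already been extracted (all shapes, names, and data are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-ins for precomputed CNN embeddings of a fashion image database
gallery = np.random.rand(1000, 512).astype(np.float32)  # database features
query = np.random.rand(512).astype(np.float32)          # query image feature

# L2-normalise so the dot product equals cosine similarity
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
query /= np.linalg.norm(query)

# Retrieval: rank gallery images by cosine similarity to the query
scores = gallery @ query
top10 = np.argsort(-scores)[:10]

# Feature clustering (the idea behind the CNNs&DFC variant): group the
# gallery into style clusters and restrict search to the query's cluster.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(gallery)
query_cluster = km.predict(query[None, :])[0]
members = np.where(km.labels_ == query_cluster)[0]
```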

Keywords: convolutional neural network, feature representation, image processing, machine modelling

Procedia PDF Downloads 138
21650 Efficient DCT Architectures

Authors: P. Suryaprasad, R. Lalitha

Abstract:

This paper presents area- and delay-efficient architectures for the implementation of the one-dimensional and two-dimensional discrete cosine transform (DCT). They support different transform lengths (4, 8, 16, and 32). DCT blocks are used in different video coding standards for image compression. The 2D-DCT calculation is performed using the separability property of the 2D-DCT, so that the whole architecture is divided into two 1D-DCT calculations connected by a transpose buffer. Based on the existing 1D-DCT architecture, two types of 2D-DCT architectures, folded and parallel, are implemented. Both structures use the same transpose buffer. The proposed transpose buffer occupies less area and achieves higher speed than the existing one. Hence, the area, power, and delay of both 2D-DCT architectures are reduced.
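
The separability property the architectures exploit is easy to verify in software: a 2D DCT equals a row-wise 1D DCT, a transpose, and another row-wise 1D DCT, which is exactly the role the transpose buffer plays in hardware. A small Python check:

```python
import numpy as np
from scipy.fftpack import dct

def dct2_separable(block: np.ndarray) -> np.ndarray:
    """2D DCT-II via the row-column scheme: 1D DCTs on the rows,
    a transpose (the hardware transpose buffer), 1D DCTs again."""
    rows = dct(block, type=2, norm="ortho", axis=1)   # first 1D-DCT stage
    return dct(rows.T, type=2, norm="ortho", axis=1).T

x = np.random.rand(8, 8)
# Reference: apply the DCT directly along both axes
ref = dct(dct(x, type=2, norm="ortho", axis=0), type=2, norm="ortho", axis=1)
assert np.allclose(dct2_separable(x), ref)
```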

Keywords: transposition buffer, video compression, discrete cosine transform, high efficiency video coding, two dimensional picture

Procedia PDF Downloads 519
21649 The Concept of Community Participation and Identified Tertiary Education Problems, Strategies and Methods

Authors: Ada Adoga James

Abstract:

This paper discusses the concept of community participation, identifies tertiary education problems, and outlines strategies and methods by which communities could be involved to reduce the conflict witnessed in our tertiary institutions of learning due to the government’s inability to fund education. The paper points out that community participation through Parent Teachers Associations (PTAs), age grades, traditional leaders, village-based associations, and religious and political organs could be mobilized to raise financial resources. The paper identifies different sources of conflict, the outcomes of which include prolonged disruption of academic activities, destruction of lives and property, and, in some cases, a school environment rendered completely insecure for serious academic activities. It recommends community participation in assisting government, proper management of tertiary institutions, and more democratic procedures in conflict resolution, such as cordial relationships between staff, students, and trade unions in the decision-making process.

Keywords: community, conflict resolution, tertiary education, psychology, psychiatry

Procedia PDF Downloads 480
21648 Rethinking the Use of Online Dispute Resolution in Resolving Cross-Border Small E-Disputes in EU

Authors: Sajedeh Salehi, Marco Giacalone

Abstract:

This paper examines the role of existing online dispute resolution (ODR) mechanisms and their effects on improving access to justice – a right protected by Art. 47 of the EU Charter of Fundamental Rights – for consumers in the EU. The major focus of this study is on evaluating ODR as a means of resolving Business-to-Consumer (B2C) cross-border small claims arising from e-commerce transactions. The authors elaborate on the consequences of implementing ODR methods in the context of recent developments in EU regulatory safeguards promoting consumer protection. In this analysis, both non-judiciary and judiciary ODR redress mechanisms are considered; however, significant consideration is given to obligatory and non-obligatory judiciary ODR methods. To that end, this paper investigates in particular the impact of the EU ODR platform and the European Small Claims Procedure (ESCP) Regulation 861/2007 and their role in accelerating access to justice for consumers in B2C e-disputes. Although a considerable volume of research has been carried out on ODR for consumer claims, rather little attention has been paid to a combined doctrinal and empirical evaluation of ODR’s potential in resolving cross-border small e-disputes in the EU. Hence, the methodological approach taken in this study is a mixed methodology based on qualitative (interviews) and quantitative (surveys) research methods, relying mainly on data acquired through the findings of the Small Claims Analysis Net (SCAN) project. This project contributes towards examining the implementation and efficiency of the ESCP Regulation in providing consumers with a legal watershed through the use of ODR for their transnational small claims. The outcomes of this research may benefit both academia and policymakers at the national and international levels.

Keywords: access to justice, consumers, e-commerce, small e-disputes

Procedia PDF Downloads 127
21647 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes a regulatory impact analysis (RIA) of the contracting efficiency of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. An inefficient contracting of this energy amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for more efficient and rational transmission system contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, is applied when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second, IIOC, is applied when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015. For this, the efficiency contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The ECI analysis demonstrated a decrease in contracting efficiency, a behaviour that was occurring even before the 2015 normative resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI notably decreased, which optimizes the usage of transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, signalling to agents that their contracted values are not adequate to maintain service provision for their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 118
21646 Comprehensive Evaluation of COVID-19 Through Chest Images

Authors: Parisa Mansour

Abstract:

The coronavirus disease 2019 (COVID-19) was discovered at the end of 2019 and rapidly spread to various countries around the world. Computed tomography (CT) images have been used as an important alternative to the time-consuming RT-PCR test. However, manual segmentation of CT images alone is a major challenge as the number of suspected cases increases. Thus, accurate and automatic segmentation of COVID-19 infections is urgently needed. Because the imaging features of COVID-19 infections vary widely and can resemble the background, existing medical image segmentation methods cannot achieve satisfactory performance. In this work, we build a deep convolutional neural network adapted for the segmentation of chest CT images with COVID-19 infections. First, we maintain a large and novel chest CT image database containing 165,667 annotated chest CT images from 861 patients with confirmed COVID-19. Inspired by the observation that the boundary of an infected lung region can be enhanced by global intensity adjustment, we introduce a feature variation block into the proposed deep CNN, which adaptively adjusts the global properties of the features for segmenting the COVID-19 infection. The proposed block can effectively and adaptively enhance the feature representation in different cases. To combine features at different scales, we propose a progressive atrous spatial pyramid fusion scheme to handle infection regions with various appearances and shapes. We conducted experiments on data collected in China and Germany and showed that the proposed deep CNN can produce impressive performance.
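
The abstract does not give the exact layer configuration, but the atrous spatial pyramid idea it names can be sketched generically: parallel dilated convolutions gather context at several scales before fusion. A hedged PyTorch sketch (channel counts and dilation rates are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Generic atrous spatial pyramid: parallel 3x3 convolutions with
    increasing dilation rates capture infection regions of different
    scales; the outputs are concatenated and fused by a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

# One feature map from a CT slice: batch=1, 64 channels, 128x128
y = ASPP(64, 64)(torch.randn(1, 64, 128, 128))  # -> (1, 64, 128, 128)
```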

Keywords: chest, COVID-19, chest image, coronavirus, CT image, chest CT

Procedia PDF Downloads 55
21645 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers, it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to achieve a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased by larger roughness, especially when applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
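
For reference, the integral quantities evaluated along the model follow their standard textbook definitions, with u(y) the streamwise velocity profile, U the free-stream velocity, and δ the boundary layer thickness:

```latex
\delta^{*} = \int_{0}^{\delta} \left(1 - \frac{u(y)}{U}\right) dy, \qquad
\theta = \int_{0}^{\delta} \frac{u(y)}{U}\left(1 - \frac{u(y)}{U}\right) dy, \qquad
H = \frac{\delta^{*}}{\theta}
```

The form factor H relaxing toward a constant value behind the roughness elements is the equilibrium behaviour reported above.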

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 304
21644 Comparison of Vessel Detection in Standard vs Ultra-WideField Retinal Images

Authors: Maher un Nisa, Ahsan Khawaja

Abstract:

Retinal imaging with Ultra-WideField (UWF) view technology has opened up new avenues in the field of retinal pathology detection. Recent developments in retinal imaging, such as the Optos California imaging device, help in acquiring high-resolution images of the retina to assist ophthalmologists in diagnosing and analyzing eye-related pathologies more accurately. This paper investigates the acquired retinal details by comparing vessel detection in standard 45° color fundus images with state-of-the-art 200° UWF retinal images.
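
The abstract does not state the detection algorithm; one common baseline for retinal vessel detection is multi-scale Frangi vesselness filtering. A hedged scikit-image sketch (the image, scales, and threshold are illustrative assumptions):

```python
import numpy as np
from skimage.filters import frangi

# Stand-in for an RGB fundus image as floats in [0, 1]
fundus = np.random.rand(512, 512, 3)

# Vessels show the best contrast in the green channel and appear dark,
# so keep black_ridges=True (the default) for the vesselness response.
green = fundus[:, :, 1]
vesselness = frangi(green, sigmas=range(1, 6), black_ridges=True)
vessel_mask = vesselness > vesselness.mean() + 2 * vesselness.std()
```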

Keywords: color fundus, retinal images, ultra-widefield, vessel detection

Procedia PDF Downloads 446
21643 Aerosol Radiative Forcing Over Indian Subcontinent for 2000-2021 Using Satellite Observations

Authors: Shreya Srivastava, Sushovan Ghosh, Sagnik Dey

Abstract:

Aerosols directly affect Earth’s radiation budget by scattering and absorbing incoming solar radiation and outgoing terrestrial radiation. While the uncertainty in aerosol radiative forcing (ARF) has decreased over the years, it is still higher than that of greenhouse gas forcing, particularly in the South Asian region, due to high heterogeneity in aerosol chemical properties. Understanding the spatio-temporal heterogeneity of aerosol composition is critical to improving climate prediction. Studies using satellite data, in-situ and aircraft measurements, and models have investigated the spatio-temporal variability of aerosol characteristics. In this study, we have taken aerosol data from the Multi-angle Imaging Spectro-Radiometer (MISR) level-2 version 23 aerosol product, retrieved at 4.4 km, and radiation data from the Clouds and the Earth’s Radiant Energy System (CERES, spatial resolution 1°×1°) for 21 years (2000-2021) over the Indian subcontinent. The MISR aerosol product includes size- and shape-segregated aerosol optical depth (AOD), the Angstrom exponent (AE), and single scattering albedo (SSA). Additionally, 74 aerosol mixtures are included in the version 23 data, which are used for aerosol speciation. We have seasonally mapped aerosol optical and microphysical properties from MISR for India at quarter-degree resolution. Results show strong spatio-temporal variability, with consistently higher AOD over the Indo-Gangetic Plain (IGP). The contribution of small-size particles is higher throughout the year, especially during the winter months. SSA is found to be overestimated where absorbing particles are present. The climatological map of short-wave (SW) ARF at the top of the atmosphere (TOA) shows strong cooling except in a few places (values ranging from +2.5 to -22.5 W/m²). Cooling due to aerosols is higher in the absence of clouds. Higher negative ARF values are found over the IGP, given the high aerosol concentration over the region. Surface ARF values are negative everywhere in our study domain, with higher magnitudes under clear conditions. The results correlate strongly with AOD from MISR and ARF from CERES.

Keywords: aerosol radiative forcing (ARF), aerosol composition, single scattering albedo (SSA), CERES

Procedia PDF Downloads 52
21642 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which results in the mapping of the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based light detection and ranging systems (LIDARs) suffer from two main disadvantages: a slow and unreliable mechanical spatial scan, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on the self-injection locking of semiconductor lasers to high-quality integrated Si3N4 microresonators. We obtained robust comb generation over 1400-1700 nm with a line spacing of 150 GHz or 1 THz, and measured sub-1 kHz Lorentzian widths of stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser. A deep investigation of the SIL dynamics allowed us to find a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as pumps. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. In order to test the advantages of the proposed techniques, we experimentally measured the minimum detectable speed of a reflective object. It has been shown that the narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with velocity resolution down to 16 nm/s, while the diode laser without SIL only allowed 160 nm/s with good accuracy. The results obtained agree with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
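
As a back-of-envelope check on the reported figures (not a calculation from the paper): a target moving at velocity v Doppler-shifts the reflected light by Δf = 2v/λ, so the quoted 16 nm/s resolution corresponds to resolving beat-note shifts of a few hundredths of a hertz:

```python
wavelength = 1550e-9     # m, near the centre of the 1400-1700 nm comb span
v_resolution = 16e-9     # m/s, reported with self-injection locking

# Doppler shift of light reflected from a target moving at v: df = 2 v / lambda
df = 2 * v_resolution / wavelength
print(f"required frequency resolution: {df:.3e} Hz")  # ~2e-2 Hz
```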

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 68
21641 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are performed on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the simulator. On the other hand, the recommendations contain additional tests, which were performed with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must comply with the results obtained at commissioning, otherwise the machine requires service; the image noise test result must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and patient table stability under longitudinal transfer of the loaded table must not show a vertical deviation of more than 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator’s functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user’s decision whether additional testing, as recommended by international organizations, is implemented, so as to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for treatment planning influences the delineation of a tumor and the calculation accuracy of the treatment planning system, and ultimately the delivery of radiation treatment to the patient.
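
A QC programme of this kind reduces naturally to tolerance checks against commissioning baselines. A minimal, illustrative Python sketch of two of the limits listed above (the functions mirror the stated ±5 HU and ±10 HU limits; this is not the authors' software):

```python
# Illustrative tolerance checks for the CT-number constancy and field
# uniformity tests; limits follow the programme described above.
def ct_number_ok(measured_hu: float, commissioning_hu: float, tol: float = 5.0) -> bool:
    return abs(measured_hu - commissioning_hu) <= tol

def field_uniformity_ok(roi_values_hu, center_hu: float, tol: float = 10.0) -> bool:
    return all(abs(v - center_hu) <= tol for v in roi_values_hu)

assert ct_number_ok(2.1, 0.0)                       # water ROI within 5 HU
assert field_uniformity_ok([3.0, -4.2, 6.1], 0.0)   # peripheral ROIs within 10 HU
```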

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 529
21640 Microarray Gene Expression Data Dimensionality Reduction Using PCA

Authors: Fuad M. Alkoot

Abstract:

Different experimental technologies, such as microarray sequencing, have been proposed to generate high-resolution genetic data in order to understand the complex dynamic interactions between complex diseases and the biological system components of genes and gene products. However, the generated samples have a very large dimension, reaching thousands, hindering all attempts to design a classifier system that can identify diseases based on such data. Additionally, the high overlap in the class distributions makes the task more difficult. The data we experiment with were generated for the identification of autism. They include 142 samples, which is small compared to the large dimension of the data. Classifier systems trained on these data yield very low classification rates that are almost equivalent to guessing. We aim at reducing the data dimension and improving it for classification. Here, we experiment with applying a multistage PCA to the genetic data to reduce its dimensionality. Results show a significant improvement in the classification rates, which increases the possibility of building an automated system for autism detection.
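
The abstract does not specify the staging, so the following is a hedged two-stage sketch of the multistage-PCA idea using scikit-learn (sample counts and component numbers are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for the microarray matrix: 142 samples x ~20k genes
X = np.random.rand(142, 20000)

# Stage 1: aggressive reduction capturing most of the variance
X1 = PCA(n_components=100).fit_transform(X)

# Stage 2: further compression of the reduced representation
X2 = PCA(n_components=10).fit_transform(X1)

print(X2.shape)  # (142, 10) -- now tractable for a classifier
```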

Keywords: PCA, gene expression, dimensionality reduction, classification, autism

Procedia PDF Downloads 559
21639 Toward Subtle Change Detection and Quantification in Magnetic Resonance Neuroimaging

Authors: Mohammad Esmaeilpour

Abstract:

One of the important open problems in the field of medical image processing is the detection and quantification of small changes. In this poster, we investigate how algebraic decomposition techniques can be used to semi-automatically detect and quantify subtle changes in Magnetic Resonance (MR) neuroimaging volumes. We mostly focus on the low-rank components of the matrices obtained by decomposing MR image pairs acquired over a period of time. In addition, a skilled neuroradiologist helps the algorithm distinguish between noise and small changes.
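
One hedged reading of the low-rank approach: truncate the SVD of each volume (or slice) so that the dominant anatomy is kept and noise-dominated components are discarded, then inspect the residual between time points. A minimal Python sketch with illustrative sizes and thresholds:

```python
import numpy as np

def low_rank(image: np.ndarray, rank: int) -> np.ndarray:
    """Truncated-SVD approximation: keep the dominant structure,
    discard components that mostly carry noise."""
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

# Two co-registered MR slices acquired at different times (illustrative)
scan_t0 = np.random.rand(256, 256)
scan_t1 = scan_t0.copy()
scan_t1[100:110, 100:110] += 0.3      # a small simulated change

residual = np.abs(low_rank(scan_t1, 20) - low_rank(scan_t0, 20))
candidates = residual > residual.mean() + 3 * residual.std()
```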

Keywords: magnetic resonance neuroimaging, subtle change detection and quantification, algebraic decomposition, basis functions

Procedia PDF Downloads 472
21638 A Distinct Reversed-Phase High-Performance Liquid Chromatography Method for Simultaneous Quantification of Evogliptin Tartrate and Metformin HCl in Pharmaceutical Dosage Forms

Authors: Rajeshkumar Kanubhai Patel, Neha Sudhirkumar Mochi

Abstract:

A simple and accurate stability-indicating reversed-phase high-performance liquid chromatography (RP-HPLC) method was developed and validated for the simultaneous quantitation of Evogliptin tartrate and Metformin HCl in pharmaceutical dosage forms, following ICH guidelines. Forced degradation was performed under various stress conditions, including acid, base, oxidation, thermal, and photodegradation. The method utilized an Eclipse C18 column (250 mm × 4.6 mm, 5 µm) with a mobile phase of 5 mM 1-hexane sulfonic acid sodium salt in water and 0.2% v/v TEA (45:55 %v/v), adjusted to pH 3.0 with OPA, at a flow rate of 1.0 mL/min. Detection at 254.4 nm using a PDA detector showed good resolution of the degradation products and both drugs. Linearity was observed within 1-5 µg/mL for Evogliptin tartrate and 100-500 µg/mL for Metformin HCl, with recoveries between 99% and 100% and precision within acceptable limits (%RSD < 2%). The method proved to be specific, precise, accurate, and robust for routine analysis of these drugs.
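
Linearity and recovery of the kind reported are typically verified with an ordinary least-squares calibration fit. A hedged Python sketch (the peak areas below are made-up numbers, not the paper's data):

```python
import numpy as np

# Illustrative calibration data for Evogliptin tartrate over the
# validated 1-5 ug/mL range.
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # ug/mL
area = np.array([101.0, 203.5, 299.8, 404.1, 502.6])

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# Recovery of a spiked sample back-calculated from the fit
measured_area = 352.0
recovery = ((measured_area - intercept) / slope) / 3.5 * 100  # % of 3.5 ug/mL
print(round(r2, 4), round(recovery, 1))
```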

Keywords: stability indicating RP-HPLC, evogliptin tartrate, metformin HCl, validation

Procedia PDF Downloads 22
21637 Juridically Secure Trade Mechanisms for Alternative Dispute Resolution in Transnational Business Negotiations

Authors: Linda Frazer

Abstract:

A pluralistic methodology focuses on promoting an understanding that an alternative juridical framework for the regulation of transnational business negotiations (TBN) between private business parties is fundamentally required. This paper deals with the evolving assessment of the author’s doctoral research, which demonstrated that, due to insufficient juridical tools, negotiations are commonly misunderstood within the complexity of pluralistic and conflicting legal regimes. This inadequacy causes uncertainty in the enforcement of legal remedies, leaving business parties surprised. Consequently, parties cannot sufficiently anticipate when and how legal rights and obligations are created, often relying on oral or incomplete agreements, which may lead to misinterpretation of the extent of their legal rights and obligations. This uncertainty threatens business parties, who fear creating unintended legal obligations or, conversely, that the law will not enforce intended agreements that fail the tests of contractual validity. A way of setting default standards of communication and conduct to monitor our evolving global trade would help the law provide the security, predictability, and foreseeability during alternative dispute resolution that TBN parties require. The conclusion of this study includes a proposal for new trade mechanisms, termed 'Bills of Negotiations' (BON), to enhance party autonomy and promote the ability of TBN parties to self-regulate within the boundaries of law. BON will be guided by a secure, institutionalized juridical setting that caters to guiding communications during TBN and resolving disputes that arise along the negotiation process on a fast-track basis.

Keywords: alternative dispute resolution, ADR, good faith, juridical security, legal regulation, trade mechanisms, transnational business negotiations

Procedia PDF Downloads 142
21636 Scar Removal Strategy for Fingerprints Using Diffusion

Authors: Mohammad A. U. Khan, Tariq M. Khan, Yinan Kong

Abstract:

Fingerprint image enhancement is one of the most important steps in an automatic fingerprint identification system (AFIS) and directly affects the overall efficiency of the AFIS. Conventional fingerprint enhancement methods, such as Gabor and anisotropic filters, do fill the gaps in ridge lines, but they fail to tackle scar lines. To deal with this problem, we propose a method for enhancing ridges and valleys affected by scars so that true minutiae points can be extracted accurately. Our results show improved enhancement performance.
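
The abstract does not give the diffusion scheme; a classical starting point is Perona-Malik nonlinear diffusion, which smooths within regions while an edge-stopping function preserves ridge/valley boundaries. A minimal Python sketch (parameters are illustrative; the authors' coherence-based variant would additionally steer diffusion along the ridge orientation):

```python
import numpy as np

def perona_malik(img: np.ndarray, n_iter=20, kappa=0.1, dt=0.2) -> np.ndarray:
    """Classical nonlinear diffusion: iteratively smooth the image while
    the edge-stopping function g suppresses flow across strong edges.
    Boundaries wrap (np.roll), which is acceptable for a sketch."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u           # differences to 4 neighbours
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

smoothed = perona_malik(np.random.rand(128, 128))
```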

Keywords: fingerprint image enhancement, removing noise, coherence, enhanced diffusion

Procedia PDF Downloads 513
21635 Geomorphology and Flood Analysis Using Light Detection and Ranging

Authors: George R. Puno, Eric N. Bruno

Abstract:

The natural landscape of the Philippine archipelago, combined with the current realities of climate change, makes the country vulnerable to flood hazards. Flooding has become a recurring natural disaster in the country, resulting in loss of lives and property. Musimusi is among the rivers which have exhibited inundation, particularly in the inhabited floodplain portion of the watershed. During such events, rescue operations and the distribution of relief goods become a problem due to the lack of high-resolution flood maps to help the local government unit identify the most affected areas. In the attempt to minimize the impact of flooding, hydrologic modelling with high-resolution mapping is becoming more challenging and important. This study focused on the analysis of flood extent as a function of different geomorphologic characteristics of the Musimusi watershed. The methods include the delineation of morphometric parameters of the Musimusi watershed using Geographic Information System (GIS) and geometric calculation tools. A Digital Terrain Model (DTM), one of the derivatives of Light Detection and Ranging (LiDAR) technology, was used to determine the extent of river inundation, involving the application of the Hydrologic Engineering Center River Analysis System (HEC-RAS) and Hydrologic Modeling System (HEC-HMS) models. A digital elevation model (DEM) from Synthetic Aperture Radar (SAR) was used to delineate the watershed boundary and river network. Datasets such as mean sea level, river cross sections, river stage, discharge, and rainfall were also used as input parameters. The curve number (CN), vegetation, and soil properties were calibrated based on the existing condition of the site. Results showed that the drainage density value of the watershed is low, which indicates highly permeable subsoil and thick vegetative cover. The watershed’s elongation ratio of 0.9 implies that the floodplain portion of the watershed is susceptible to flooding. The bifurcation ratio of 2.1 indicates a higher risk of flooding in localized areas of the watershed. The circularity ratio (1.20) indicates that the basin is circular in shape, with high runoff discharge and low subsoil permeability. The heavy rainfall of 167 mm brought by Typhoon Seniang on December 29, 2014, characterized as high-intensity and long-duration with a return period of 100 years, produced an outflow of 316 m³/s. A portion of the floodplain zone (1.52%) suffered inundation, with a maximum depth of 2.76 m. The information generated in this study is helpful to the local disaster risk reduction management council in monitoring the affected sites for more appropriate decisions, so that the cost of rescue operations and relief goods distribution is minimized.
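
The morphometric indices quoted above follow standard definitions (Horton's drainage density and bifurcation ratio, Schumm's elongation ratio, Miller's circularity ratio). A small Python reference sketch, with the inputs left symbolic:

```python
import math

def drainage_density(total_stream_length_km, basin_area_km2):
    """Horton: total stream length per unit basin area."""
    return total_stream_length_km / basin_area_km2

def elongation_ratio(basin_area_km2, basin_length_km):
    """Schumm: diameter of a circle with the basin's area over basin length."""
    return (2.0 * math.sqrt(basin_area_km2 / math.pi)) / basin_length_km

def circularity_ratio(basin_area_km2, perimeter_km):
    """Miller: basin area over the area of a circle with the same perimeter."""
    return 4.0 * math.pi * basin_area_km2 / perimeter_km ** 2

def bifurcation_ratio(n_streams_order_u, n_streams_order_u_plus_1):
    """Horton: stream count of one order over the count of the next order."""
    return n_streams_order_u / n_streams_order_u_plus_1
```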

Keywords: flooding, geomorphology, mapping, watershed

Procedia PDF Downloads 229
21634 Applying Semi-Automatic Digital Aerial Survey Technology and Canopy Characters Classification for Surface Vegetation Interpretation of Archaeological Sites

Authors: Yung-Chung Chuang

Abstract:

The cultural layers of archaeological sites are mainly affected by surface land use, land cover, and the root systems of surface vegetation. For this reason, continuous monitoring of land use and land cover change is important for archaeological site protection and management. However, in actual operation, on-site investigation and orthophoto interpretation require a lot of time and manpower. It is therefore necessary to find a good alternative for surveying surface vegetation in an automated or semi-automated manner. In this study, we applied semi-automatic digital aerial survey technology and canopy character classification to very high-resolution aerial photographs for surface vegetation interpretation of archaeological sites. The main idea is that different landscape or forest types can easily be distinguished by canopy characters (e.g., specific texture distributions, shadow effects, and gap characters) extracted by semi-automatic image classification. A novel methodology to classify the shapes of canopy characters using landscape indices and multivariate statistics was also proposed. Non-hierarchical cluster analysis was used to assess the optimal number of canopy character clusters, and canonical discriminant analysis was used to generate the discriminant functions for canopy character classification (seven categories). Therefore, one can easily predict the forest type and vegetation land cover from the specific canopy character category. The results showed that the semi-automatic classification could effectively extract the canopy characters of forest and vegetation land cover. For forest type and vegetation type prediction, the average prediction accuracy reached 80.3%-91.7% for different test frame sizes. This indicates that the technology is useful for archaeological site surveys and can improve classification efficiency and the data update rate.
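
The cluster-then-discriminate pipeline described above can be sketched with scikit-learn: non-hierarchical (k-means) clustering with a score to pick the cluster count, then linear discriminant functions for assigning new canopy characters. A hedged sketch (the feature dimensions, the silhouette criterion, and the data are assumptions, not the paper's exact procedure):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import silhouette_score

# Stand-in for landscape-index vectors of extracted canopy characters
X = np.random.rand(500, 12)

# Non-hierarchical clustering; scan k and keep the best silhouette score
best_k = max(range(2, 10), key=lambda k: silhouette_score(
    X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)))
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)

# Discriminant functions for assigning new canopy characters to clusters
lda = LinearDiscriminantAnalysis().fit(X, labels)
new_character = np.random.rand(1, 12)
predicted_cluster = lda.predict(new_character)
```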

Keywords: digital aerial survey, canopy characters classification, archaeological sites, multivariate statistics

Procedia PDF Downloads 140
21633 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continuous existence of human life, as humans depend directly on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize output. Technology can aid agriculture in several ways, from pre-planning to post-harvest, by using computer vision and image processing to determine soil nutrient composition; to apply farm input resources such as fertilizers, herbicides, and water in the right amount, at the right time, and in the right place; and to detect weeds, pests, and diseases early. This is precision agriculture, which is thought to be the solution required to achieve these goals. There has been significant improvement in image processing and data processing, which has been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of vegetation images need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, deep learning with convolutional neural networks, the most effective approach, generating excellent results in image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. Experiments on the developed model yielded an average accuracy of 99.58%.

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 313
21632 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform

Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier

Abstract:

Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has a faded amplitude due to multipath effects. These effects make it hard for an occupying user to be detected. This work mitigates the missed-detection problem in the context of cognitive radio in a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. By initially applying conventional energy detection, the missed-detection probability is evaluated, and if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution. A mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in the missed-detection probability compared to detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improvement in detection probability for low signal-to-noise-ratio scenarios over principal component analysis-based energy detection.
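
The paper's modified function is not reproduced in the abstract; for orientation, the standard Morlet wavelet and its first derivative take the following form (differentiation multiplies the spectrum by iω, which reshapes the frequency response):

```latex
\psi(t) = \pi^{-1/4}\, e^{i\omega_0 t}\, e^{-t^2/2}, \qquad
\psi'(t) = \pi^{-1/4}\, \left(i\omega_0 - t\right) e^{i\omega_0 t}\, e^{-t^2/2}, \qquad
\widehat{\psi'}(\omega) = i\omega\, \hat{\psi}(\omega)
```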

Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing

Procedia PDF Downloads 194
21631 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models

Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel

Abstract:

In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, the in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models to test pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy. Virtual scan microscopy followed by 3D reconstruction with image alignment methods was also used to obtain 3D images of whole large sponge and ganglion samples. 3D reconstruction, image segmentation, skeletonisation, and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse the behaviour of lymphatic endothelial cell spheroids. The study of cell spatial distribution in spheroid models makes it possible to detect interactions between cells and to identify invasion hierarchies and guidance patterns. Global measurements such as the volume, length, and density of lymphatic vessels are taken in both in vivo models. Branching density and tortuosity evaluation are also proposed to determine structural complexity. Those properties, combined with vessel spatial distribution, are evaluated in order to determine the extent of lymphangiogenesis. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions. The comparison of these conditions makes it possible to identify lymphangiogenic agents and to better understand their roles in the lymphangiogenesis process. The proposed methodology is validated by its application to the three presented models.
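
Skeleton-based measures such as vessel length and tortuosity can be sketched with scikit-image (3D thinning is exposed as skeletonize_3d in the versions we assume here; the volume below is a toy stand-in):

```python
import numpy as np
from skimage.morphology import skeletonize_3d

# Binary mask of segmented lymphatic vessels (3D volume, illustrative)
vessels = np.zeros((64, 64, 64), dtype=bool)
vessels[32, 32, 10:50] = True                  # a straight toy vessel

skeleton = skeletonize_3d(vessels)             # centreline extraction
total_length_voxels = int(np.count_nonzero(skeleton))  # crude length estimate

# Tortuosity of one branch: path length over end-to-end (chord) distance
path = np.argwhere(skeleton)
chord = np.linalg.norm(path[0] - path[-1])
tortuosity = total_length_voxels / chord       # ~1 for a straight vessel
```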

Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids

Procedia PDF Downloads 375
21630 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Using High-Viscosity Liquids

Authors: Ayalew Yimam Ali

Abstract:

The Y-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the Y-junction microchannel can be difficult due to micro-scale laminar flow with the two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce a rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The 3D trapezoidal triangular structure spine developed in this study was created using sophisticated CNC machine cutting tools to produce a microchannel mold with the 3D trapezoidal triangular spine along the longitudinal mixing region of the Y-junction. The molds for the 3D trapezoidal structure, with sharp-edge tip angles of 30° and a 0.3 mm trapezoidal triangular sharp-edge tip depth, were machined from PMMA (polymethylmethacrylate) glass with an advanced CNC machine, and the channel was fabricated in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the Y-junction microchannel, using soft-lithography nanofabrication strategies. Flow visualization of the 3D rolling steady acoustic streaming and of the mixing enhancement with high-viscosity miscible fluids was performed for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes, using micro-particle image velocimetry (μPIV) to study the 3D acoustic streaming flow patterns and mixing enhancement. The streaming velocity fields and vorticity fields show vorticity up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale value of pixel intensity with MATLAB software. Mixing experiments were performed with a fluorescent green dye solution with de-ionized water on one inlet side of the channel and the de-ionized water-glycerol mixture on the other inlet side of the Y-channel; the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that a new, intense, three-dimensional steady streaming rolling motion with a high volume flow rate is generated around the entrance junction mixing zone for the two miscible high-viscosity fluids, whose transport is otherwise governed by laminar flow phenomena.
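
A common intensity-based definition of the degree of mixing, consistent with the grayscale evaluation described above (though the authors' exact formula is not given in the abstract), compares the pixel-intensity standard deviation with that of the unmixed state:

```python
import numpy as np

def mixing_degree(gray: np.ndarray, gray_unmixed: np.ndarray) -> float:
    """Intensity-based mixing index: 1 - sigma/sigma0, where sigma is the
    pixel-intensity standard deviation in the evaluation window and
    sigma0 that of the fully unmixed reference. 1.0 = perfectly mixed."""
    return 1.0 - gray.std() / gray_unmixed.std()

unmixed = np.concatenate([np.zeros(5000), np.ones(5000)])    # dye / no dye
well_mixed = np.full(10000, 0.5) + np.random.normal(0, 0.02, 10000)
print(mixing_degree(well_mixed, unmixed))                    # close to 1
```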

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 20
21629 A Study of Common Carotid Artery Behavior from B-Mode Ultrasound Image for Different Gender and BMI Categories

Authors: Nabilah Ibrahim, Khaliza Musa

Abstract:

An increase in intima-media thickness (IMT), which involves changes in the diameter of the carotid artery, is one of the early symptoms of an atherosclerotic lesion. Manual measurement of the arterial diameter is time-consuming and lacks reproducibility. Thus, this study reports an automatic approach to finding arterial diameter behavior for different gender and body mass index (BMI) categories, focusing on the tracked region. The BMI categories are underweight, normal, and overweight. Canny edge detection is applied to the B-mode image to extract the information treated as the carotid wall boundary. The results show a significant difference in arterial diameter between the male and female groups, of 2.5%. In addition, across the BMI categories, the arterial diameter decreases in proportion to BMI.
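
A hedged sketch of the edge-based diameter idea with OpenCV: detect wall edges with Canny, then take the span between the first and last edge pixels along a scan line crossing the vessel (thresholds, image, and column are illustrative, not the authors' settings):

```python
import numpy as np
import cv2

# Stand-in for a B-mode frame as an 8-bit grayscale array
frame = (np.random.rand(300, 400) * 255).astype(np.uint8)
edges = cv2.Canny(frame, 50, 150)          # wall boundaries as edge pixels

# Naive diameter estimate along one scan line: distance between the
# first and last edge pixels in a column crossing the vessel.
col = edges[:, 200]
rows = np.flatnonzero(col)
if rows.size >= 2:
    diameter_px = rows[-1] - rows[0]       # multiply by mm/pixel to get mm
```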

Keywords: B-mode Ultrasound Image, carotid artery diameter, canny edge detection, body mass index

Procedia PDF Downloads 440
21628 Adversarial Attacks and Defenses on Deep Neural Networks

Authors: Jonathan Sohn

Abstract:

Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; they aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image to move it over a decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once, based on the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack, called the targeted attack, which is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network with clean examples only, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy improves greatly, from 39.50% to 92.31% under FGSM attacks and from 34.01% to 75.63% under PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method: PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attack and defense are overall significantly more effective than the FGSM methods.
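
The two attacks have compact reference implementations. A hedged PyTorch sketch (epsilon, step size, and iteration count are illustrative; inputs are assumed to live in [0, 1]):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Fast gradient sign method: a single step of size eps in the
    direction that increases the loss, clamped to the valid range."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient descent: repeated small FGSM steps, each
    followed by projection back into the eps-ball around the input."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, alpha)
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)
    return x_adv.clamp(0, 1)
```

Adversarial training of the kind described above then amounts to generating such examples on the fly for each mini-batch and including them in the training loss.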

Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning

Procedia PDF Downloads 193
21627 Normalized Compression Distance Based Scene Alteration Analysis of a Video

Authors: Lakshay Kharbanda, Aabhas Chauhan

Abstract:

In this paper, an application of Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video occur are identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and defining scene alterations using pixel difference percentage metrics.
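
NCD itself is a few lines given a compressor. A minimal Python sketch with the two compressors named above (the frame bytes are stand-ins; real use would serialize actual video frames):

```python
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=bz2.compress) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))"""
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two consecutive frames serialized to bytes (illustrative); a value
# above a chosen threshold would flag a scene alteration.
frame_a = b"..." * 1000
frame_b = b"..." * 999 + b"xyz"
print(ncd(frame_a, frame_b, bz2.compress))
print(ncd(frame_a, frame_b, lzma.compress))
```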

Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error

Procedia PDF Downloads 338
21626 Anomalous Behaviors of Visible Luminescence from Graphene Quantum Dots

Authors: Hyunho Shin, Jaekwang Jung, Jeongho Park, Sungwon Hwang

Abstract:

For the application of graphene quantum dots (GQDs) to optoelectronic nanodevices, it is of critical importance to understand the mechanisms behind the novel phenomena of their light absorption/emission. Optical transitions are known to be available up to ~6 eV in GQDs, which is especially useful for ultraviolet (UV) photodetectors (PDs). Here, we present size-dependent shape/edge-state variations of GQDs and visible photoluminescence (PL) showing anomalous size dependencies. As the average size (da) of the GQDs is varied from 5 to 35 nm, the peak energy of the absorption spectra monotonically decreases, while that of the visible PL spectra unusually shows nonmonotonic behavior with a minimum at a diameter of ∼17 nm. The PL behavior can be attributed to a novel feature of GQDs, namely the circular-to-polygonal shape transition and corresponding edge-state variation of GQDs at a diameter of ∼17 nm as the GQD size increases, as demonstrated by high-resolution transmission electron microscopy. We believe that such a comprehensive scheme for designing device architectures and the structural formulation of GQDs provides a route toward the practical realization of environmentally benign, high-performance flexible devices in the future.

Keywords: graphene, quantum dot, size, photoluminescence

Procedia PDF Downloads 293
21625 Quality Analysis of Vegetables Through Image Processing

Authors: Abdul Khalique Baloch, Ali Okatan

Abstract:

The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify gaps, suggest a better approach, design the algorithm, and develop software to measure quality from images, where the accuracy of the image-based results compares favorably with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. We focus on sorting food and vegetables from images: after processing the images, the application sorts and grades them, producing fewer errors than human-based manual grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This work is about quality detection of fruits and vegetables using images. Many customers suffer due to unhealthy food and vegetables from suppliers, and no proper quality measurement level is followed by hotel management. We have developed software that measures the quality of fruits and vegetables from images and reports whether they are fresh or rotten. The algorithms reviewed in this work include ResNet, VGG16, CNNs, and transfer learning for grading and feature extraction.
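
The abstract names ResNet/VGG16 and transfer learning; a hedged TensorFlow sketch of that pattern (the class count, input size, and head layers are assumptions, not the authors' configuration):

```python
import tensorflow as tf

# Transfer learning: a frozen VGG16 backbone pretrained on ImageNet,
# with a small trainable head for grading produce.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # fresh / medium / rotten
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

After training, such a model can be converted with the TensorFlow Lite converter for on-device grading, matching the framework mentioned above.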

Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria

Procedia PDF Downloads 68
21624 'Low Electronic Noise' Detector Technology in Computed Tomography

Authors: A. Ikhlef

Abstract:

Image noise in computed tomography is mainly caused by statistical noise, system noise, and reconstruction algorithm filters. In recent years, low-dose x-ray imaging has become more and more desirable and is seen as a technically differentiating technology among CT manufacturers. In order to achieve this goal, several technologies and techniques are being investigated, including both hardware- (integrated electronics and photon counting) and software-based (artificial intelligence and machine learning) solutions. From a hardware point of view, electronic noise could indeed be a potential driver for low- and ultra-low-dose imaging. We demonstrated that the reduction or elimination of this term could lead to a reduction of dose without affecting image quality. In this study, we also show that we can achieve this goal using conventional electronics (a low-cost and affordable technology), designed carefully and optimized for maximum detective quantum efficiency. We conducted the tests using large imaging objects such as 30 cm water and 43 cm polyethylene phantoms. We compared the image quality with conventional imaging protocols at radiation levels as low as 10 mAs (<< 1 mGy). Clinical validation of these results has been performed as well.
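
The physics behind the claim is that quantum noise scales with dose while electronic noise does not, so the electronic term dominates at low dose. A small illustrative Python calculation (all values are made up for illustration, not measured):

```python
import numpy as np

# Detected photons per measurement scale with dose; electronic noise adds a
# dose-independent variance, so its relative impact grows as dose drops.
photons = np.array([50, 100, 1000, 10000])   # low to high dose (illustrative)
sigma_e = 10.0                               # electronic noise, photon-equivalents

rel_noise_ideal = np.sqrt(photons) / photons              # quantum only
rel_noise_real = np.sqrt(photons + sigma_e**2) / photons  # with electronics
print(np.round(rel_noise_real / rel_noise_ideal, 2))      # penalty at low dose
```

In this toy model the noise penalty from electronics is negligible at high dose but grows markedly at low dose, which is why reducing or eliminating the electronic term enables the low-mAs protocols described above.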

Keywords: computed tomography, electronic noise, scintillation detector, x-ray detector

Procedia PDF Downloads 121