Search results for: Fourier transform infrared
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2121

411 Terahertz Glucose Sensors Based on Photonic Crystal Pillar Array

Authors: S. S. Sree Sanker, K. N. Madhusoodanan

Abstract:

Optical biosensors are a dominant alternative to traditional analytical methods because of their small size, simple design, and high sensitivity. Photonic sensing is one of the recently advancing technologies for biosensors. It measures the change in refractive index induced by differences in molecular interactions as the concentration of the analyte changes. Glucose is an aldose monosaccharide that serves as a metabolic energy source in many organisms. Terahertz waves occupy the region between infrared and microwaves in the electromagnetic spectrum. Terahertz waves are expected to be applied to various types of sensors for detecting harmful substances in blood, cancer cells in skin, and microbes in vegetables. We have designed glucose sensors using silicon-based 1D and 2D photonic crystal pillar arrays in the terahertz frequency range. The 1D photonic crystal has rectangular pillars with a height of 100 µm, length of 1600 µm, and width of 50 µm; its array period is 500 µm. The 2D photonic crystal has a 5×5 cylindrical pillar array with an array period of 75 µm; the height and diameter of the pillars are 160 µm and 100 µm, respectively. The two samples considered in this work are blood and glucose solution, labelled sample 1 and sample 2, respectively. The proposed sensor detects glucose concentrations in the samples from 0 to 100 mg/dL. For this, the crystal was irradiated with 0.3 to 3 THz waves. By analyzing the obtained S-parameters, the refractive index of the crystal corresponding to a particular glucose concentration was extracted using the parameter retrieval method. The refractive indices of the two crystals decreased gradually with increasing glucose concentration in the sample. For the 1D photonic crystal, a gradual decrease in refractive index was observed at 1 THz; the 2D photonic crystal showed this behavior at 2 THz. The proposed sensor was simulated using CST Microwave Studio. This will enable us to develop a model for characterizing a glucose sensor. The present study is expected to contribute to blood glucose monitoring.
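The parameter retrieval step described above can be sketched with the standard S-parameter inversion for a homogeneous slab, cos(n·k₀·d) = (1 − S₁₁² + S₂₁²)/(2·S₂₁). This is an illustrative numpy round trip, not the authors' CST workflow; the slab forward model, the non-magnetic impedance assumption (z = 1/n), and the numerical values are ours:

```python
import numpy as np

def slab_s_params(n, z, k0d):
    """Forward model: normal-incidence S-parameters of a homogeneous slab."""
    gamma = (z - 1) / (z + 1)            # Fresnel reflection at the interface
    p = np.exp(1j * n * k0d)             # one-way propagation phase
    denom = 1 - gamma**2 * p**2
    s11 = gamma * (1 - p**2) / denom
    s21 = (1 - gamma**2) * p / denom
    return s11, s21

def retrieve_index(s11, s21, k0d):
    """Standard retrieval: cos(n*k0*d) = (1 - S11^2 + S21^2) / (2*S21)."""
    cos_nkd = (1 - s11**2 + s21**2) / (2 * s21)
    return np.arccos(cos_nkd) / k0d      # principal branch, valid for n*k0*d < pi

# round trip for a lossless dielectric slab (non-magnetic, so z = 1/n)
n_true, k0d = 1.5, 1.0
s11, s21 = slab_s_params(n_true, 1 / n_true, k0d)
n_est = retrieve_index(s11, s21, k0d)
print(n_est.real)                        # recovers 1.5
```

In practice the branch of the arccos must be tracked across frequency; the sketch stays on the principal branch by keeping n·k₀·d below π.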

Keywords: CST microwave studio, glucose sensor, photonic crystal, terahertz waves

Procedia PDF Downloads 254
410 Sensory Gap Analysis on Port Wine Promotion and Perceptions

Authors: José Manue Carvalho Vieira, Mariana Magalhães, Elizabeth Serra

Abstract:

The Port Wine industry is essential to Portugal, both for the tangible cultural heritage it carries and for social and economic reasons. Positioned as a luxury product, Port Wine brands need to pay more attention to the new generation's habits, preferences, languages, and sensory perceptions. Healthy lifestyles, anti-alcohol campaigns, and the digitalisation of the buying decision process need to be better understood to anticipate the wine market of the future. The purpose of this study is to clarify the gap between the sensory descriptors used in Port Wine promotion and the new generation's perceptions, in order to help wineries align their strategies. Based on an interpretivist approach - multiple methods and techniques (mixed methods), different world views and assumptions, and different data collection methods and analyses - this research integrated qualitative semi-structured interviews, Port Wine promotional content, and social media perceptions mined by the Sentiment Analysis Enginius algorithm. Findings confirm that Port Wine CEOs' strategies, brands' promotional content, and social perceptions are not sufficiently aligned. The central insight for Port Wine brand managers is that long and continuous work is needed to understand and associate their descriptors with the most relevant perceptual values and criteria of their targets, so as to reposition (when necessary) and sustainably revitalise their brands. Finally, this study hypothesised a sensory gap that leads to a decrease in consumption and sought recommendations on how to transform it into an advantage for better attracting the young age group (18-25).

Keywords: port wine, consumer habits, sensory gap analysis, wine marketing

Procedia PDF Downloads 202
409 Neural Graph Matching for Modification Similarity Applied to Electronic Document Comparison

Authors: Po-Fang Hsu, Chiching Wei

Abstract:

In this paper, we present a novel neural graph matching approach applied to document comparison. Document comparison is a common task in the legal and financial industries. In some cases, the most important differences may be the addition or omission of words, sentences, clauses, or paragraphs. However, this is a challenging task when the full editing process has not been recorded or traced. Under such temporal uncertainty, we explore the potential of our approach to approximate an accurate comparison and determine which element blocks are related to others by an edit. First, we apply a document layout analysis that combines traditional and modern techniques to segment layouts appropriately into blocks of various types. We then cast the issue as a problem of layout graph matching with textual awareness. Graph matching is a long-studied problem with a broad range of applications; however, unlike previous works focusing on visual images or structural layout, we also bring textual features into our model to adapt it to this domain. Specifically, based on the electronic document, we introduce an encoder to handle the visual presentation decoded from PDF. Additionally, because modifications can make document layout analysis inconsistent between the original and modified documents, and because blocks can be merged and split, Sinkhorn divergence is adopted in our neural graph approach, which addresses both issues with many-to-many block matching. We demonstrate the approach on two categories of layouts, legal agreements and scientific articles, collected from our real-case datasets.
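The many-to-many soft matching idea can be illustrated with plain Sinkhorn normalization of a block-similarity matrix (a minimal numpy sketch; the actual model uses learned features and the Sinkhorn divergence, and the score matrix below is invented):

```python
import numpy as np

def sinkhorn(scores, n_iters=50, tau=0.1):
    """Sinkhorn normalization: alternate row/column scaling of exp(scores/tau)
    toward a (near) doubly stochastic soft assignment between block sets."""
    K = np.exp(scores / tau)
    for _ in range(n_iters):
        K = K / K.sum(axis=1, keepdims=True)   # rows sum to 1
        K = K / K.sum(axis=0, keepdims=True)   # columns sum to 1
    return K

# invented similarity scores between 3 source blocks and 3 target blocks
scores = np.array([[0.9, 0.1, 0.0],
                   [0.2, 0.8, 0.1],
                   [0.0, 0.1, 0.7]])
P = sinkhorn(scores)
print(P.argmax(axis=1))                        # hard reading: [0 1 2]
```

Because each row and column is softly normalized rather than forced to a permutation, merged or split blocks can spread their assignment mass over several counterparts.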

Keywords: document comparison, graph matching, graph neural network, modification similarity, multi-modal

Procedia PDF Downloads 150
408 Analysis of Ionospheric Variations over Japan during 23rd Solar Cycle Using Wavelet Techniques

Authors: C. S. Seema, P. R. Prince

Abstract:

The characterization of spatio-temporal inhomogeneities occurring in the ionospheric F₂ layer is important, since these variations are direct consequences of the electrodynamic coupling between the magnetosphere and solar events. The temporal and spatial variations of the F₂ layer, which occur with periods of several days or even years, are mainly due to geomagnetic and meteorological activities. The hourly F₂ layer critical frequency (foF2) over the 23rd solar cycle (1996-2008) at three ionosonde stations (Wakkanai, Kokubunji, and Okinawa) in the northern hemisphere, which fall within the same longitudinal span, is analyzed using continuous wavelet techniques. The Morlet wavelet is used to transform the continuous foF2 time series into a two-dimensional time-frequency space, quantifying the time evolution of the oscillatory modes. The presence of significant periodicities and the time location of each periodicity are detected from the two-dimensional representation of the wavelet power in the plane of scale and period of the time series. The mean strength of each periodicity over the entire period of analysis is studied using the global wavelet spectrum. The quasi-biennial, annual, semiannual, 27-day, diurnal, and 12-hour variations of foF2 are clearly evident in the wavelet power spectra at all three stations. Critical frequency oscillations with multi-day periods (2-3 days and 9 days at the low-latitude station, 6-7 days at all stations, and 15 days at the mid-high-latitude station) are also superimposed on larger time-scale variations.
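The analysis pipeline above (Morlet CWT, then time-averaging into a global wavelet spectrum) can be sketched on a synthetic hourly series; this is a minimal numpy implementation with an invented signal, not the foF2 data:

```python
import numpy as np

W0 = 6.0                                                 # Morlet central frequency
FOURIER_FACTOR = 4 * np.pi / (W0 + np.sqrt(2 + W0**2))   # period ~ 1.033 x scale

def morlet_cwt(x, periods, dt):
    """CWT of x with a complex Morlet mother wavelet, one row per period."""
    out = np.empty((len(periods), len(x)), dtype=complex)
    for i, T in enumerate(periods):
        s = T / FOURIER_FACTOR                       # scale for this period
        t = np.arange(-4 * s, 4 * s + dt, dt)        # finite wavelet support
        psi = np.pi**-0.25 * np.exp(1j * W0 * t / s - (t / s)**2 / 2)
        out[i] = np.convolve(x, psi, mode='same') * dt / np.sqrt(s)
    return out

# synthetic hourly "foF2": a diurnal (24 h) oscillation plus a weak 27-day term
dt = 1.0                                             # hours
t = np.arange(0, 24 * 60, dt)                        # 60 days of hourly samples
x = np.sin(2 * np.pi * t / 24) + 0.5 * np.sin(2 * np.pi * t / (27 * 24))

periods = np.array([12.0, 24.0, 48.0, 96.0])         # candidate periods (hours)
power = np.abs(morlet_cwt(x, periods, dt))**2
gws = power.mean(axis=1)                             # global wavelet spectrum
print(periods[np.argmax(gws)])                       # expect the diurnal peak
```

The time-averaged power (the global wavelet spectrum) peaks at the period of the dominant oscillation, which is how the diurnal, 27-day, and longer periodicities stand out in the full analysis.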

Keywords: continuous wavelet analysis, critical frequency, ionosphere, solar cycle

Procedia PDF Downloads 187
407 A Mathematical Study of Magnetic Field, Heat Transfer and Brownian Motion of Nanofluid over a Nonlinear Stretching Sheet

Authors: Madhu Aneja, Sapna Sharma

Abstract:

The thermal conductivity of ordinary heat transfer fluids is not adequate to meet today's cooling rate requirements. Nanoparticles have been shown to increase the thermal conductivity and convective heat transfer of the base fluids. One possible mechanism for the anomalous increase in the thermal conductivity of nanofluids is the Brownian motion of the nanoparticles in the base fluid. In this paper, the natural convection of an incompressible nanofluid over a nonlinear stretching sheet in the presence of a magnetic field is studied. The flow and heat transfer induced by stretching sheets are important in the study of extrusion processes and are a subject of considerable interest in the contemporary literature. Appropriate similarity variables are used to transform the governing nonlinear partial differential equations into a system of nonlinear ordinary (similarity) differential equations. For computational purposes, the finite element method is used. The effective thermal conductivity and viscosity of the nanofluid are calculated by the KKL (Koo-Kleinstreuer-Li) correlation, in which the effect of Brownian motion on thermal conductivity is considered. The effects of the important parameters, i.e., the nonlinear parameter, volume fraction, Hartmann number, and heat source parameter, on velocity and temperature are studied. Skin friction and heat transfer coefficients are also calculated for the parameters concerned.
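The similarity-reduction-then-numerics workflow can be illustrated on the simplest member of this problem family, Crane's linearly stretching sheet (magnetic and nanoparticle terms dropped), for which the exact solution f(η) = 1 − e^(−η) provides a check. The paper uses the finite element method; scipy's collocation BVP solver stands in here as a sketch:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Similarity-reduced momentum equation for Crane's linearly stretching sheet
# (magnetic and nanoparticle terms dropped): f''' + f f'' - f'^2 = 0,
# with f(0) = 0, f'(0) = 1, f'(inf) = 0.
def rhs(eta, y):                          # y = [f, f', f'']
    f, fp, fpp = y
    return np.vstack([fp, fpp, fp**2 - f * fpp])

def bc(ya, yb):
    return np.array([ya[0], ya[1] - 1.0, yb[1]])

eta = np.linspace(0.0, 10.0, 200)         # truncate "infinity" at eta = 10
y0 = np.vstack([1 - np.exp(-eta),         # initial guess: the exact solution
                np.exp(-eta),
                -np.exp(-eta)])
sol = solve_bvp(rhs, bc, eta, y0)

# the exact solution f(eta) = 1 - exp(-eta) gives skin friction f''(0) = -1
print(sol.status, round(sol.y[2, 0], 3))
```

The full problem adds the Hartmann number, nonlinear stretching exponent, and KKL-dependent coefficients to this system, but the structure (a similarity ODE system with a far-field boundary condition) is the same.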

Keywords: Brownian motion, convection, finite element method, magnetic field, nanofluid, stretching sheet

Procedia PDF Downloads 181
406 The Development of Solar Cells to Maximize the Utilization of Solar Energy in Al-Baha Area

Authors: Mohammed Ahmed Alghamdi, Hazem Mahmoud Ali Darwish, Mostafa Mohamed Abdelraheem

Abstract:

Transparent conducting oxides (TCOs) possess low resistivity, exhibit good adherence to many substrates, and have good transmission characteristics from visible to near-infrared wavelengths, which makes them useful for various applications. Thin films of TCOs have received much attention because of their wide applications in the field of optoelectronic devices. Advances in TCOs may lie not only in the improvement of existing materials in use but also in the development of novel materials. Solar cells are devices which convert solar energy into electricity, either directly via the photovoltaic effect or indirectly by first converting the solar energy to heat or chemical energy. Solar power has attracted attention of late as the most advanced of the alternative energy resources. The project aims to harness solar energy in the Al-Baha region by searching for materials (transparent conducting oxides) for use in solar cells that are highly transparent to the solar spectrum, have low electrical resistivity, are stable under H-plasma, and have a suitable structure, in particular for a-Si solar cells. As the PV surface is exposed to sunlight, the module temperature increases. High ambient temperatures along with long sunlight exposure times increase the impact of temperature on PV cell efficiency. Since the Al-Baha area is characterized by an atmosphere and pressure different from those elsewhere in Saudi Arabia due to its height above sea level, it is appropriate to conduct studies to improve the efficiency of solar cells under these conditions. In this work, some ion-change materials will be deposited using either sputtering or electron-beam evaporation techniques. The optical properties of the synthesized materials will be studied in detail for solar cell application, and we will study the effect of some dyes on the optical properties of the prepared films. The efficiency and other parameters of the solar cells will be determined.

Keywords: thin films, solar cell, optical properties, electrical properties

Procedia PDF Downloads 437
405 Intraoperative ICG-NIR Fluorescence Angiography Visualization of Intestinal Perfusion in Primary Pull-Through for Hirschsprung Disease

Authors: Mohammad Emran, Colton Wayne, Shannon M Koehler, P. Stephen Almond, Haroon Patel

Abstract:

Purpose: Assessment of anastomotic perfusion in Hirschsprung disease using indocyanine green (ICG) near-infrared (NIR) fluorescence angiography. Introduction: Anastomotic stricture and leak are well-known complications of Hirschsprung pull-through procedures. Complications are due to tension, infection, and/or poor perfusion. While a surgeon can visually determine and control the amount of tension and contamination, assessment of perfusion is subject to surgeon judgment. Intraoperative use of ICG-NIR enhances this decision-making process by illustrating the intensity and adequacy of perfusion in the pulled-through bowel segment. This technique, proven to reduce anastomotic stricture and leak in adults, has not, to our knowledge, been studied in children. ICG, an FDA-approved, nontoxic, non-immunogenic, intravascular (IV) dye, has been used in adults and children for over 60 years, with few side effects. ICG-NIR was used in this report to demonstrate the adequacy of perfusion during transanal pull-through for Hirschsprung disease. Method: Eight patients with Hirschsprung disease were evaluated with ICG-NIR technology. The levels of the affected area ranged from sigmoid to total colonic Hirschsprung disease. After leveling, but prior to anastomosis, ICG was administered at 1.25 mg (< 2 mg/kg) and perfusion was visualized using an NIR camera, before and during anastomosis. Video and photo imaging were performed, and perfusion of the bowel was compared to surrounding tissues, showing the degree of perfusion and the demarcation between perfused and non-perfused bowel. The anastomoses were completed uneventfully, and all patients did well. Results: There were no complications of stricture or leak. Five of eight patients (62.5%) had their plan modified based on ICG-NIR imaging. Conclusion: Technologies that enhance surgeons' ability to visualize bowel perfusion prior to anastomosis in Hirschsprung patients may help reduce post-operative complications. Further studies are needed to assess the potential benefits.

Keywords: colonic anastomosis, fluorescence angiography, Hirschsprung disease, pediatric surgery, SPY

Procedia PDF Downloads 105
404 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are being made in the wind power generation industry as a leading source of clean energy. The wind energy sector depends heavily on the prediction of wind speed, which by nature is highly stochastic. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark; they represent the wind speed across the study area for the period between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of including the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE). The results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and pre-processing the signals for wind speed forecasting models.
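As a minimal illustration of the regression stage only (single-fidelity, without the EWT denoising or vector-component fusion the study uses), a Gaussian process can be fit to a synthetic wind-speed series and extrapolated forward; the data, kernel, and split below are all assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 48, 97)                       # 48 h at 30-min resolution
speed = 8 + 2 * np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(t.size)

train = t < 36                                   # fit on 36 h, predict 12 h ahead
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(t[train, None], speed[train])

mean = gpr.predict(t[~train, None])
rmse = np.sqrt(np.mean((mean - speed[~train]) ** 2))
print(rmse < 2.5)                                # crude extrapolation sanity check
```

The study's gains come from adding the denoised u/v components as extra input columns (and fidelity levels) to such a model; the sketch only shows the GPR fit/predict/RMSE loop itself.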

Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation

Procedia PDF Downloads 108
403 Transforming ESL Teaching and Learning with ICT

Authors: Helena Sit

Abstract:

Developing skills in using ICT in the language classroom has been discussed at all educational levels. Digital tools and learning management systems enable teachers to transform their instructional activities while giving learners the opportunity to engage with virtual communities. In the field of English as a second language (ESL) teaching and learning, the use of technology-enhanced learning and diverse pedagogical practices continues to grow. While technology and multimodal learning are the way of the future for education, second language teachers now face the predicament of whether implementing these newer ways of learning is, in fact, beneficial or disadvantageous to learners. Research has shown that integrating multimodality and technology can improve students' engagement and participation in their English language learning. However, students can experience anxiety or misunderstanding when engaging with e-learning or digitally mediated learning. This paper aims to explore how ESL teaching and learning are transformed via the use of educational technology and what impact this has had on student teachers. A case study approach is employed in this research. The study reviews the growing presence of technology and multimodality in university language classrooms, discusses their impact on teachers' pedagogical practices, and proposes scaffolding strategies to help design effective English language courses in the Australian education context. The study sheds light on how pedagogical integration today may offer a way forward for the language teachers of tomorrow, and provides implications for implementing an evidence-informed approach that blends knowledge from research, practice, and the people experiencing the practice in the digital era.

Keywords: educational technology, ICT in higher education, curriculum design and innovation, teacher education, multiliteracies pedagogy

Procedia PDF Downloads 37
402 Counter-Urbanisation and Digital Nomads: Connections of the Two Phenomena and Infrastructure in Greece

Authors: Dimitrios Orfanos, Yannis Maniatis, Alcestis Rodi

Abstract:

The overconcentration of people in big cities (namely Athens and Thessaloniki) and the tendency for their density to increase in the upcoming years cause various problems on personal, environmental, and social levels. During the COVID-19 pandemic, a reversal in urbanisation was observed. The counter-urbanisation that took place, along with the steady growth of the digital nomad lifestyle, opens up new paths for policies that rejuvenate the non-urban regions of Greece and elsewhere. Promoting actions, either through incentives or through the creation of organised structures, can transform the Greek rural regions into attractive destinations for those who want to avoid life in big cities permanently or for a short period of time. The regions that apply such policies will in turn gain a multiplier effect. Greece, being a country of great touristic interest to foreigners, can use the influx of long-stay visitors as a boost, paving the way for the Greek urban population that works remotely to move permanently to more rural regions and creating the conditions for growth in those regions. The paper studies several cases of such policies, in combination with different options for the methods that can be used to take better advantage of them. Examples from European and worldwide use cases are presented, noting the parts that can be applied in a country like Greece. An example of an abandoned village that could be revived through the methods described in the paper is also presented. The next possible step in this research could be a case study in one of the various locations to determine the market's level of maturity for pursuing such actions.

Keywords: counter-urbanization, digital nomads, rural growth, village revival

Procedia PDF Downloads 56
401 Passively Q-Switched 914 nm Microchip Laser for LIDAR Systems

Authors: Marco Naegele, Klaus Stoppel, Thomas Dekorsy

Abstract:

Passively Q-switched microchip lasers offer great potential for sophisticated LiDAR systems due to their compact overall system design, excellent beam quality, and scalable pulse energies. However, many near-infrared solid-state lasers emit at wavelengths > 1000 nm, which are not compatible with state-of-the-art silicon detectors. Here we demonstrate a passively Q-switched microchip laser operating at 914 nm. The microchip laser consists of a 3 mm long Nd:YVO₄ crystal as the gain medium, while Cr⁴⁺:YAG with an initial transmission of 98% is used as a saturable absorber. Quasi-continuous pumping enables single-pulse operation, and low duty cycles ensure low overall heat generation and power consumption. Thus, thermally induced instabilities are minimized, and operation without active cooling is possible, while ambient temperature changes are compensated by adjusting the pump laser current only. Single-emitter diode pumping at 808 nm leads to a compact overall system design and a robust setup. Utilization of a microchip cavity approach ensures single-longitudinal-mode operation with spectral bandwidths in the picometer regime and results in short laser pulses with pulse durations below 10 ns. Beam quality measurements reveal an almost diffraction-limited beam and enable conclusions concerning the thermal lens, which is essential to stabilize the plane-plane resonator. A 7% output coupler transmissivity is used to generate pulses with energies in the microjoule regime and peak powers of more than 600 W. Long-term measurements of pulse duration, pulse energy, central wavelength, and spectral bandwidth emphasize the excellent system stability and support the use of this laser in a LiDAR system.
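The reported figures are mutually consistent under the usual rectangular-pulse estimate P_peak ≈ E/τ; taking an assumed 6 µJ pulse at the quoted 10 ns upper bound:

```python
# simple rectangular-pulse estimate: peak power ~ pulse energy / pulse duration
pulse_energy = 6e-6        # J, "microjoule regime" (assumed value)
pulse_duration = 10e-9     # s, "below 10 ns" (upper bound)
peak_power = pulse_energy / pulse_duration
print(peak_power)          # ~600 W, matching "more than 600 W"
```

Shorter actual pulse durations or higher energies push the peak power above this estimate, consistent with the "more than 600 W" claim.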

Keywords: diode-pumping, LiDAR system, microchip laser, Nd:YVO4 laser, passively Q-switched

Procedia PDF Downloads 104
400 The Factors Constitute the Interaction between Teachers and Students: An Empirical Study at the Notion of Framing

Authors: Tien-Hui Chiang

Abstract:

Code theory, proposed by Basil Bernstein, indicates that framing can be viewed as the core element constituting the phenomenon of cultural reproduction, because it regulates the transmission of pedagogical information. Strong framing widens the social relation boundary between a teacher and pupils, which obstructs information transmission, so that in order to improve underachieving students' academic performances, teachers need to reduce the strength of framing. Weak framing enables them to transform academic knowledge into commonsense knowledge in the language of daily life. This study posits that most teachers deliver strong framing because their beliefs are mainly confined to instrumental rationality, which suppresses their critical minds. This situation can lead them to view the normal distribution bell curve of students' academic performances as a natural outcome. In order to examine the interplay between framing, instrumental rationality, and pedagogical action, questionnaires were completed by over 5,000 primary school teachers in Henan province, China, selected by stratified sampling. The statistical results show that most teachers employed psychological concepts to measure students' academic performances and, in turn, educational inequity was legitimized as a natural outcome of the efficiency-led approach. Such efficiency-led mindsets made them act as agents practicing the mechanism of social control and, in turn, sustaining the phenomenon of cultural reproduction.

Keywords: code, cultural reproduction, framing, instrumental rationality, social relation and interaction

Procedia PDF Downloads 118
399 Structural Health Monitoring of Buildings - Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

This article presents a structural health monitoring (SHM) method based on changes in wave travel times (the wave method) within a layered 1-D shear beam model of a structure. The wave method measures the velocity of the shear wave propagating in a building from impulse response functions (IRFs) obtained from data recorded at different locations inside the building. If structural damage occurs in a structure, the velocity of wave propagation through it changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear wall structure located in Santiago, Chile. Because events of different intensities (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method utilized. The analysis of inter-story drifts and the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest coupling of the translational and torsional responses. The system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement, from recorded data) were observed to decrease initially by approximately 24% in the EW motion; near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in the wave velocities of the shear beam model are consistent with the observed damage. However, the 1-D shear beam model is not sufficient to simulate the coupling of translational and torsional responses in the NS motion. The wave method is suitable for actual implementation in structural health monitoring systems, provided the resolution and accuracy of the model are carefully assessed for effectiveness in post-earthquake damage detection in buildings.
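The core computation of the wave method, deconvolving a roof record by the base record to obtain an impulse response function whose peak lag is the wave travel time, can be sketched on synthetic records (the water-level regularization and all numbers below are assumptions, not the Torre Central data):

```python
import numpy as np

def travel_time(base, roof, dt):
    """Travel time from base to roof via regularized deconvolution.

    IRF(w) = Roof(w) * conj(Base(w)) / (|Base(w)|^2 + eps); the lag of the
    IRF peak is taken as the upward shear-wave travel time."""
    n = len(base)
    B = np.fft.rfft(base, 2 * n)
    R = np.fft.rfft(roof, 2 * n)
    eps = 0.01 * np.max(np.abs(B)) ** 2              # water-level regularization
    irf = np.fft.irfft(R * np.conj(B) / (np.abs(B) ** 2 + eps))
    return np.argmax(irf[:n]) * dt                   # peak lag in seconds

rng = np.random.default_rng(1)
dt, H = 0.005, 27.0                  # 200 Hz sampling; assumed 27 m building
base = rng.standard_normal(4000)     # broadband base-level excitation
delay = 30                           # 30 samples = 0.15 s upward travel time
roof = 0.8 * np.concatenate([np.zeros(delay), base[:-delay]])  # delayed copy

tau = travel_time(base, roof, dt)
print(tau, H / tau)                  # travel time (s), shear-wave velocity (m/s)
```

A drop in the velocity H/τ between events is the damage indicator; in the real method the IRFs are band-limited and the building is treated as several layers rather than the single homogeneous layer simulated here.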

Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, Torre Central building, wave method

Procedia PDF Downloads 342
398 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images, which may be irregular or have digit or character shapes. Objects and internal objects are quite difficult to identify and extract when the structure of the image contains a bulk of clusters. The estimation results are easily obtained by identifying the sub-regional objects with the SASK algorithm. The main focus is to recognize the number of internal objects in a given image, so that the result is shadow-free and error-free. The hard clustering and density clustering of the rough set obtained from the image are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally the hull detection system. Detecting the sub-regional hulls can increase machine learning capability in the detection of characters, and the approach can also be extended to hull recognition even for irregularly shaped objects, such as black holes in space exploration with their intensities. Layered hulls are those having structured layers inside; they are useful in military services and traffic monitoring to identify the number of vehicles or persons. The proposed SASK algorithm is helpful in identifying such regions and can be used in decision processes (to clear traffic, or to identify the number of persons on the opponent's side in a war).
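The SASK algorithm itself is not spelled out in the abstract, but the hull detection step can be illustrated with a standard convex hull routine (Andrew's monotone chain) over foreground pixel coordinates; the blob below is invented:

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2-D points in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                         # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):               # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]        # endpoints shared, drop duplicates

# foreground pixel coordinates of a blob with interior points
blob = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3), (3, 1)]
print(convex_hull(blob))                  # interior points drop out
```

Applied per segmented sub-region after boundary extraction, counting the resulting hulls gives the number of internal objects the abstract refers to.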

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 307
397 Comparison of Direction of Arrival Estimation Method for Drone Based on Phased Microphone Array

Authors: Jiwon Lee, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Drones were first developed for military use and were used in World War I. Recently, however, drones have been used in a variety of fields. Several companies actively utilize drone technology to strengthen their services; in agriculture, drones are used for crop monitoring and sowing, and other people use drones for hobby activities such as photography. However, as the range of use of drones expands rapidly, problems caused by drones, such as improper flying, privacy violations, and terrorism, are also increasing. As the need for monitoring and tracking of drones increases, research is progressing accordingly. A drone detection system estimates the position of a drone using the physical phenomena that occur when the drone flies. The drone detection systems being developed utilize many approaches, such as radar, infrared cameras, and acoustic detection. Among the various drone detection systems, the acoustic detection system is advantageous in that the microphone array is smaller, less expensive, and easier to operate than other systems. In this paper, the acoustic signal is acquired using a minimal number of microphones while the drone is flying, and the direction of the drone is estimated. When estimating the direction of arrival (DOA), there is a method of calculating the DOA based on the time difference of arrival (TDOA) and a method of calculating the DOA based on beamforming. The TDOA technique requires fewer microphones than the beamforming technique, but it is weak in noisy environments and can only estimate the DOA of a single source. The beamforming technique requires more microphones than the TDOA technique; however, it is robust in noisy environments, and it is possible to estimate the DOAs of several drones simultaneously. When estimating the DOA using the acoustic signal emitted from a drone, it is impossible to measure the position of the drone; only the direction can be estimated. To overcome this problem, in this work we show how to estimate the position of drones by arranging multiple microphone arrays. The microphone arrays used in the experiments were four tetrahedral microphone arrays. We simulated the performance of each DOA algorithm and demonstrated the simulation results through experiments.
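The TDOA variant can be sketched for a single far-field source and one microphone pair: cross-correlate the two channels, convert the peak lag to a time difference, and map it to an angle via θ = arcsin(c·Δt/d). The geometry and signals below are simulated, not the tetrahedral-array experiment:

```python
import numpy as np

C = 343.0        # speed of sound in air (m/s)
FS = 48_000      # sampling rate (Hz)
D = 0.5          # microphone spacing (m)

def doa_tdoa(sig_a, sig_b):
    """Far-field DOA from one microphone pair via cross-correlation TDOA."""
    corr = np.correlate(sig_b, sig_a, mode='full')
    lag = np.argmax(corr) - (len(sig_a) - 1)         # positive: b lags a
    tdoa = lag / FS
    return np.degrees(np.arcsin(np.clip(tdoa * C / D, -1.0, 1.0)))

# simulate a broadband drone signature arriving from 30 degrees off broadside
rng = np.random.default_rng(2)
s = rng.standard_normal(4800)                        # 0.1 s of broadband noise
true_lag = int(round(D * np.sin(np.radians(30.0)) / C * FS))  # ~35 samples
mic_a = s
mic_b = np.concatenate([np.zeros(true_lag), s[:-true_lag]])   # delayed copy

print(round(doa_tdoa(mic_a, mic_b), 1))              # close to 30.0 degrees
```

A single pair only resolves a cone of directions; intersecting bearings from multiple arrays, as in the paper, is what turns directions into a position estimate.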

Keywords: acoustic sensing, direction of arrival, drone detection, microphone array

Procedia PDF Downloads 125
396 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well-structured and available in numerical or categorical formats which can be used for experiments directly. But at the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it is found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor a standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
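The indexed string-matching idea behind such a system can be sketched as a greedy longest-match dictionary lookup over a normalized token stream (a toy stand-in: the three-entry lexicon below imitates UMLS-style curated sources and is not Q-Map's actual index):

```python
import re

# toy curated knowledge source: surface form -> (concept id, semantic type)
# (illustrative entries; a real index holds millions of curated surface forms)
LEXICON = {
    "myocardial infarction": ("C0027051", "Disease"),
    "aspirin": ("C0004057", "Drug"),
    "chest pain": ("C0008031", "Symptom"),
}
MAX_NGRAM = max(len(k.split()) for k in LEXICON)

def extract_concepts(text):
    """Greedy longest-match dictionary lookup over a normalized token stream."""
    tokens = re.findall(r"[a-z]+", text.lower())
    found, i = [], 0
    while i < len(tokens):
        # try the longest n-gram first so multi-word concepts win
        for n in range(min(MAX_NGRAM, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in LEXICON:
                found.append((phrase, *LEXICON[phrase]))
                i += n
                break
        else:
            i += 1
    return found

note = "Pt presented with chest pain; hx of myocardial infarction, on Aspirin."
print(extract_concepts(note))
```

A production system would index the lexicon (e.g., by first token) rather than probe a flat dictionary, which is where the speed and configurability claims come in.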

Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics

Procedia PDF Downloads 106
395 Precise Spatially Selective Photothermolysis Skin Treatment by Multiphoton Absorption

Authors: Yimei Huang, Harvey Lui, Jianhua Zhao, Zhenguo Wu, Haishan Zeng

Abstract:

Conventional laser treatment of skin diseases and cosmetic surgery is based on the principle of one-photon-absorption selective photothermolysis, which relies strongly on the difference in light absorption between the therapeutic target and its surrounding tissue. However, when the difference in one-photon absorption is not sufficient, collateral damage occurs due to indiscriminate, nonspecific tissue heating. To overcome this problem, we developed a spatially selective photothermolysis method based on multiphoton absorption, in which heat generation is restricted to the focal point of a tightly focused near-infrared femtosecond laser beam aligned with the target of interest. A multimodal optical microscope with co-registered reflectance confocal microscopy (RCM), two-photon fluorescence (TPF), and second harmonic generation (SHG) imaging capabilities was used to perform and monitor the spatially selective photothermolysis. Skin samples excised from the shaved backs of euthanized NOD-SCID mice were used in this study. Treatments were performed by focusing and scanning the laser beam in the dermis over a 50 µm × 50 µm target area. Treatment power levels of 200 mW to 400 mW and modulated pulse trains of different durations and periods were tested. Different treatment parameters achieved different degrees of spatial confinement of tissue alterations, as visualized by 3-D RCM/TPF/SHG imaging. At a 200 mW power level, 0.1 s pulse train duration, and 4.1 s pulse train period, the tissue damage was restricted precisely to the 50 µm × 50 µm × 10 µm volume through which the laser focal spot had scanned. The overlying epidermis/dermis tissue and the dermis tissue underneath were intact, although light passed through these regions.

Keywords: multiphoton absorption photothermolysis, reflectance confocal microscopy, second harmonic generation microscopy, spatially selective photothermolysis, two-photon fluorescence microscopy

Procedia PDF Downloads 490
394 Contextual Enablers and Behaviour Outputs for Action of Knowledge Workers

Authors: Juan-Gabriel Cegarra-Navarro, Alexeis Garcia-Perez, Denise Bedford

Abstract:

This paper provides guidelines for what constitutes a knowledge worker. Many graduates from non-managerial domains adopt, at some point in their professional careers, management roles at different levels, ranging from team leader through to executive leadership. This is particularly relevant for professionals from an engineering background. Moving from a technical to an executive level requires an understanding of the behaviour management techniques that can motivate and support individuals and their performance. Further, the transition to management also demands a shift of contextual enablers from tangible to intangible resources, which allows individuals to create new capacities, competencies, and capabilities. In this dynamic process, the knowledge worker becomes the key individual who can help members of the management board to transform information into relevant knowledge. However, despite its relevance in shaping the future of the organization in its transition to the knowledge economy, the role of the knowledge worker has not yet been studied to an appropriate level in the current literature. In this study, the authors review both the contextual enablers and the behaviour outputs related to the role of the knowledge worker and relate these to the ability to deal with everyday management issues such as knowledge heterogeneity, varying motivations, information overload, or outdated information. Highlighting that the aggregate of capacities, competences, and capabilities (CCCs) can be defined as knowledge structures, the study proposes several contextual enablers and behaviour outputs that knowledge workers can use to work cooperatively and to acquire and distribute knowledge. This study therefore contributes to a better comprehension of how CCCs can be managed at different levels through their contextual enablers and behaviour outputs.

Keywords: knowledge workers, capabilities, capacities, competences, knowledge structures

Procedia PDF Downloads 126
393 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet

Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi

Abstract:

One of the most challenging factors in medical imaging is noise. Image denoising refers to the improvement of a digital medical image that has been corrupted by noise; a digital medical image or video can be affected by different types of noise, namely impulse noise, Poisson noise, and additive white Gaussian noise (AWGN). Computed tomography (CT) images in particular suffer from low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients (ADP): increasing the absorbed radiation enhances CT image quality. Accordingly, reducing noise in order to enhance image quality without exposing patients to excess radiation is one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2D) transforms, the curvelet and the contourlet, and discrete wavelet transform (DWT) thresholding with the BayesShrink and AdaptShrink methods, and the approaches were compared with each other. We also propose a new threshold in the wavelet domain that achieves not only noise reduction but also edge retention: the proposed method retains the significant modified coefficients, resulting in good visual quality. Evaluations were carried out using two criteria, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
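The abstract names BayesShrink but does not reproduce it; as a sketch, BayesShrink sets a per-subband threshold T = σ_n²/σ_x, where σ_n is the noise standard deviation and σ_x is estimated from the noisy coefficients, and then applies soft thresholding. The following minimal version operates on a flat list of wavelet coefficients; function names are illustrative, and a real pipeline would apply this subband by subband after a DWT.

```python
import math

def bayes_shrink_threshold(coeffs, noise_sigma):
    """BayesShrink threshold T = sigma_n^2 / sigma_x for one subband.

    sigma_x^2 is estimated as max(var(coeffs) - sigma_n^2, 0), treating
    the observed coefficients as signal plus independent noise.
    """
    n = len(coeffs)
    var_y = sum(c * c for c in coeffs) / n
    sigma_x = math.sqrt(max(var_y - noise_sigma ** 2, 0.0))
    if sigma_x == 0.0:
        return max(abs(c) for c in coeffs)  # noise dominates: kill subband
    return noise_sigma ** 2 / sigma_x

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink every coefficient toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```

Soft thresholding suppresses small (noise-dominated) coefficients while only shrinking large (edge-carrying) ones, which is why the paper's edge-retaining variant modifies the threshold rather than the shrinkage rule.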

Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)

Procedia PDF Downloads 414
392 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data

Authors: M. Yilmaz, I. Yilmaz, M. Uysal

Abstract:

The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the behaviour of the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth into the geometrical surface on which positions are mathematically defined. In this paper, the main components of gravity field modeling, the free-air and Bouguer gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) in order to determine the best-fitting global model for the study area at a regional scale in Turkey. The lowest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the lowest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.
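The SD and RMSE comparison above can be sketched directly: compute model-minus-terrestrial residuals, then their standard deviation about the mean and their root mean square. The function name and sample values are illustrative, not the paper's data; note that RMSE² = SD² + mean², so RMSE exceeds SD whenever the residuals carry a bias.

```python
import math

def residual_stats(model_vals, terrestrial_vals):
    """SD and RMSE (same units as input, e.g. mGal) of the residuals
    between model-derived and terrestrial gravity anomalies."""
    res = [m - t for m, t in zip(model_vals, terrestrial_vals)]
    n = len(res)
    mean = sum(res) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in res) / n)
    rmse = math.sqrt(sum(r * r for r in res) / n)
    return sd, rmse
```

Running this per global model over the same station set, as the paper does, and picking the model with the smallest statistics gives the "best fit" ranking.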

Keywords: free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity

Procedia PDF Downloads 141
391 Data Analytics in Energy Management

Authors: Sanjivrao Katakam, Thanumoorthi I., Antony Gerald, Ratan Kulkarni, Shaju Nair

Abstract:

With increasing energy costs and their impact on business, sustainability has evolved from a social expectation into an economic imperative, so finding methods to reduce cost has become a critical directive for industry leaders. Effective energy management is the only way to cut costs. However, energy management has been a challenge because it requires a change in old habits and in legacy systems followed for decades. Today, exorbitant volumes of energy and operational data are being captured and stored by industries, but they are unable to convert these structured and unstructured data sets into meaningful business intelligence. For quick decisions, organizations must learn to cope with large volumes of operational data in different formats. Energy analytics not only helps extract inferences from these data sets but is also instrumental in the transformation from old approaches to energy management to new ones, which in turn assists in effective decision making for implementation. Organizations need an established corporate strategy for reducing operational costs through visibility and optimization of energy usage, and energy analytics plays a key role in the optimization of operations. The paper describes how energy data analytics is today extensively used in different scenarios, such as reducing operational costs, predicting energy demand, optimizing network efficiency, asset maintenance, and improving customer and device data insights. The paper also highlights how analytics helps transform insights obtained from energy data into sustainable solutions, drawing on data from an array of segments such as the retail, transportation, and water sectors.

Keywords: energy analytics, energy management, operational data, business intelligence, optimization

Procedia PDF Downloads 334
390 Liver and Liver Lesion Segmentation From Abdominal CT Scans

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver and liver lesions is regarded as a major primary step in the computer-aided diagnosis of liver diseases, and precise liver segmentation in abdominal CT images is one of its most important stages. In this paper, a semi-automated method for liver and liver lesion segmentation based on mathematical morphology is presented. Our algorithm proceeds in two parts: in the first, we determine the region of interest by applying morphological filters to extract the liver; the second part detects the liver lesions. The proposed method relies on anatomical information and on the mathematical morphology tools used in the image processing field. First, we improve the quality of the original image and of the image gradient by applying a spatial filter followed by morphological filters. We then calculate the internal and external markers of the liver and the hepatic lesions. Finally, we segment the liver and hepatic lesions with the marker-controlled watershed transform. The developed algorithm was validated on several images, and the results obtained show its good performance.
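The morphological filtering and gradient steps above are easiest to see in one dimension: grayscale erosion and dilation take the minimum and maximum over a sliding window, and their difference, the morphological gradient, lights up exactly at region boundaries, which is the relief the marker-controlled watershed then floods. This is a generic illustration with a flat structuring element, not the authors' exact filter chain.

```python
def erode(signal, size=3):
    """Grayscale erosion: minimum over a sliding window
    (flat structuring element of the given size)."""
    h = size // 2
    n = len(signal)
    return [min(signal[max(0, i - h):min(n, i + h + 1)]) for i in range(n)]

def dilate(signal, size=3):
    """Grayscale dilation: maximum over a sliding window."""
    h = size // 2
    n = len(signal)
    return [max(signal[max(0, i - h):min(n, i + h + 1)]) for i in range(n)]

def morphological_gradient(signal, size=3):
    """Dilation minus erosion: peaks at edges, i.e. the relief that a
    marker-controlled watershed floods from its internal/external markers."""
    return [d - e for d, e in zip(dilate(signal, size), erode(signal, size))]
```

In 2-D the windows become neighborhoods (disks or squares), but the edge-highlighting behavior is identical.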

Keywords: anisotropic diffusion filter, CT images, hepatic lesion segmentation, liver segmentation, morphological filter, watershed algorithm

Procedia PDF Downloads 421
389 The Impact of Regulatory Changes on the Development of Mobile Medical Apps

Authors: M. McHugh, D. Lillis

Abstract:

Mobile applications are being used to perform a wide variety of tasks in day-to-day life, ranging from checking email to controlling your home heating. Application developers have recognized the potential to transform a smart device, i.e. a mobile phone or a tablet, into a medical device by using a mobile medical application. When initially conceived, these mobile medical applications performed basic functions, e.g. a BMI calculator or access to reference material; however, increasing complexity now offers clinicians and patients a wide range of functionality. As this complexity and functionality increases, so too does the potential risk associated with using such an application. Examples include applications that can inflate and deflate blood pressure cuffs, as well as applications that use patient-specific parameters to calculate dosage or create a dosage plan for radiation therapy. If an unapproved mobile medical application is marketed by a medical device organization, the organization faces significant penalties, such as an FDA warning letter to cease the prohibited activity, fines, and the possibility of a criminal conviction. Regulatory bodies have finalized guidance intended to help mobile application developers establish whether their applications are subject to regulatory scrutiny. However, regulatory controls appear to contradict the approaches taken by mobile application developers, who generally work with short development cycles and very little documentation, and as such there is the potential for these regulations to stifle further improvement. The research presented in this paper details how, by adopting development techniques such as agile software development, mobile medical application developers can meet regulatory requirements whilst still fostering innovation.

Keywords: agile, applications, FDA, medical, mobile, regulations, software engineering, standards

Procedia PDF Downloads 337
388 Water-Repellent Coating Based on Thermoplastic Polyurethane, Silica Nanoparticles and Graphene Nanoplatelets

Authors: S. Naderizadeh, A. Athanassiou, I. S. Bayer

Abstract:

This work describes a layer-by-layer spraying method to produce a non-wetting coating based on thermoplastic polyurethane (TPU) and silica nanoparticles (Si-NPs). The main purpose of this work was to transform a hydrophilic polymer into a superhydrophobic coating. The contact angle of pure TPU was measured at about 77° ± 2°, and water droplets did not roll away upon tilting, even at 90°. After applying a layer of Si-NPs on top, not only did the contact angle increase to 165° ± 2°, but water droplets also rolled away at tilt angles below 5°. The most important restriction in this study was the weak interfacial adhesion between the polymer and the nanoparticles, which degraded the durability of the coatings. To overcome this problem, we used a very thin layer of graphene nanoplatelets (GNPs) as an interlayer between the TPU and Si-NP layers, followed by thermal treatment at 150 °C. The samples' morphology and topography were characterized by scanning electron microscopy (SEM), EDX analysis, and atomic force microscopy (AFM). It was observed that Si-NPs embedded into the polymer phase in the presence of the GNPs layer, probably because of the high surface area and considerable thermal conductivity of the graphene platelets. The contact angle of the sample containing graphene decreased slightly with respect to the coating without graphene, reaching 156.4° ± 2°, due to the reduced surface roughness. The durability of the coatings against abrasion was evaluated by the Taber® abrasion test, and the superhydrophobicity of the coatings persisted longer in the presence of the GNPs layer. Owing to the simple fabrication method and good durability, this coating can serve as a durable superhydrophobic coating for metals and can be produced at large scale.

Keywords: graphene, silica nanoparticles, superhydrophobicity, thermoplastic polyurethane

Procedia PDF Downloads 159
387 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images

Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu

Abstract:

The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images with image processing algorithms is an effective approach, but producing water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three level set models for extracting lake contours: the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model. Experimental tests indicate that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF and yields desirable contour lines when there are “holes” in the water regions, such as islands in a lake. Therefore, the RSF model is applied to extracting lake contours from Landsat satellite images. Four temporal Landsat images, from the years 2000, 2005, 2010, and 2014, are used in our study. All of them were acquired in May, with the same path/row (121/036), covering Xuzhou City, Jiangsu Province, China. First, the near infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information update, and linear stretching is applied in order to distinguish water from other land cover types. For the first temporal image, acquired in 2000, lake contours are extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as the initialization, lake contours are updated for the current temporal image by means of the RSF model, and the changed and unchanged lakes are detected. The results show that great changes have taken place in two lakes, Dalong Lake and Panan Lake, and that the RSF model can effectively extract and update lake contours from multi-temporal satellite images.
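A full level-set evolution is beyond a short sketch, but the region-fitting idea shared by the CV and RSF models, partitioning pixels so that each region deviates as little as possible from its own mean intensity, can be illustrated in one dimension. The iteration below updates a threshold to the midpoint of the two region means, which is the stationary condition of the two-region piecewise-constant (Chan-Vese) energy; it is an illustration of the energy being minimized, not the authors' variational implementation, and the water/land sample values are assumptions.

```python
def two_region_fit(values, tol=1e-6):
    """Find a threshold separating two approximately piecewise-constant
    regions (e.g. water vs. land NIR intensities).

    Iterates t <- (mean(values <= t) + mean(values > t)) / 2, the
    stationary condition of the two-region Chan-Vese energy in 1-D.
    """
    t = sum(values) / len(values)
    while True:
        low = [v for v in values if v <= t]
        high = [v for v in values if v > t]
        if not low or not high:
            return t
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
```

The RSF model replaces the two global means with locally fitted means, which is what lets it resolve "holes" such as islands that a single global pair of means cannot capture.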

Keywords: level set model, multi-temporal image, lake contour extraction, contour update

Procedia PDF Downloads 335
386 Rainfall Estimation over Northern Tunisia by Combining Meteosat Second Generation Cloud Top Temperature and Tropical Rainfall Measuring Mission Microwave Imager Rain Rates

Authors: Saoussen Dhib, Chris M. Mannaerts, Zoubeida Bargaoui, Ben H. P. Maathuis, Petra Budde

Abstract:

In this study, a new method to delineate rain areas in northern Tunisia is presented. The proposed approach blends the geostationary Meteosat Second Generation (MSG) infrared (IR) channel with rain rates from the low-earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). Blending these two products requires two main steps. First, the rainy pixels are identified by a classification using the MSG IR 10.8 µm and water vapor (WV 6.2 µm) channels, applying a threshold of less than 11 K on their temperature difference, which approximately identifies clouds with a high likelihood of precipitation. The second step fits the relation between IR cloud-top temperature and the TMI rain rates. The correlation between these two variables is negative: with decreasing temperature, rainfall intensity increases. The fitted equation is applied to a full day of MSG images at 15-minute intervals, which are then summed. To validate this combined product, daily extreme rainfall events that occurred during the period 2007-2009 were selected, using a threshold criterion of large rainfall depth (> 50 mm/day) at one rainfall station at least. The inverse distance interpolation method was applied to generate rainfall maps for the drier summer season (May to October) and the wet winter season (November to April). The evaluation of the rainfall estimated by combining MSG and TMI was very encouraging: all the events were detected as rainy, and the correlation coefficients were much better than those of products previously evaluated over the study area, such as the MSGMPE and PERSIANN products. The combined product performed better during the wet season. We also note an overestimation of the maximal estimated rain for many events.
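The fitting step above amounts to regressing TMI rain rate on IR cloud-top temperature; a minimal least-squares version is sketched below. The linearity of the fit and the sample values are assumptions for illustration — the paper only states that a fitted relation with a negative tendency is used.

```python
def fit_rain_rate(temps_k, rain_rates):
    """Least-squares line rain = a * T + b relating IR cloud-top
    temperature (K) to co-located TMI rain rate (mm/h).

    With colder cloud tops raining harder, the slope a comes out negative.
    """
    n = len(temps_k)
    mt = sum(temps_k) / n
    mr = sum(rain_rates) / n
    cov = sum((t - mt) * (r - mr) for t, r in zip(temps_k, rain_rates))
    var = sum((t - mt) ** 2 for t in temps_k)
    a = cov / var
    b = mr - a * mt
    return a, b
```

Applied per rainy pixel to every 15-minute MSG image and summed over the day, such a relation yields the daily rainfall field that the paper validates against gauge data.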

Keywords: combination, extreme, rainfall, TMI-MSG, Tunisia

Procedia PDF Downloads 145
385 Exploring the Cross-Cultural Practice of Transnational Community in Taiwan

Authors: Ya-Hsuan Wang

Abstract:

This project of intercultural education aimed to explore pluricultural people’s interpretation and evaluation of the transnational community in Taiwan. Based on transnationalism and transculturalism, this study concerns human rights issues for immigrants and pluricultural people. Research participants, immigrants in Taiwan, were asked about their typical ways of thinking about the transnational community, their cultural integration in terms of transnational behaviors, and their collective memory of the transnational community. Interview questions included which key factors were involved in their identity negotiation, what roles the transnational community and collective memory played in that negotiation, and which positive or negative aspects shaped their cross-border identity. Based on the experiences of pluricultural people and transnational communities, this project aims to deepen and broaden the development of transcultural knowledge in textbook reform on History in K-12 schools, transforming cross-border identity into knowledge embedded in local culture in response to globalization and localization. The purpose of this paper is to portray the cross-cultural practice of the transnational community for Taiwan’s immigrants: to report their external socio-cultural expectations of ethnic economics, to understand the internal life course of national identity, and to clarify the transnational community in relation to cross-border identity. In conclusion, the cross-cultural practice of the transnational community combines external contexts, such as ethnic economic interaction among transnational communities, social reporting, and ethnic industry, with internal contexts, such as ethnic identity, language use, and collective memory in ethnic history.

Keywords: cross-cultural practice, immigrants, pluricultural people, transnational community

Procedia PDF Downloads 169
384 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science

Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier

Abstract:

Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge of and insight into the data and the characteristics of the underlying dynamics, while also facilitating tasks such as noise removal and feature extraction. Since most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series fluctuate at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data-driven, making it possible to obtain well-behaved signal components without any prior assumptions on the input data. Among the most popular time series decomposition techniques cited in the literature are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above, as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system, and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series will be discussed and compared.
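Of the methods named above, singular spectrum analysis has the most compact first step: embed the series into a trajectory (Hankel) matrix of lagged windows, decompose that matrix (by SVD), and reconstruct grouped components by averaging anti-diagonals. The embedding and the Hankelization step are sketched below with a window length chosen for illustration; the SVD in between is omitted to keep the sketch dependency-free.

```python
def trajectory_matrix(series, window):
    """SSA embedding: stack the lagged windows of the series as rows.

    Row i is series[i : i + window]; the SVD of this matrix yields the
    elementary components that SSA groups into trend, oscillations, noise.
    """
    k = len(series) - window + 1
    return [[series[i + j] for j in range(window)] for i in range(k)]

def diagonal_average(matrix):
    """Hankelization: reconstruct a series from a (possibly filtered)
    trajectory matrix by averaging its anti-diagonals."""
    k, w = len(matrix), len(matrix[0])
    n = k + w - 1
    sums = [0.0] * n
    counts = [0] * n
    for i in range(k):
        for j in range(w):
            sums[i + j] += matrix[i][j]
            counts[i + j] += 1
    return [s / c for s, c in zip(sums, counts)]
```

Because each anti-diagonal of the trajectory matrix corresponds to one time index, diagonal averaging of an unmodified matrix returns the original series exactly; filtering the singular components before averaging is what produces the denoised or detrended reconstructions discussed in the paper.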

Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis

Procedia PDF Downloads 75
383 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area

Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma

Abstract:

The environmental pollution caused by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, some from remote areas, and various methodologies and techniques have been used to determine microplastics in such matrices, for instance in sediment samples from the ocean bottom. To determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density-separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing the identification of the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas, and such regulation is defined from the known relevance and trends of the pollution. This work discusses the assessment of contamination trends in a 700 km² oceanic area, an assessment affected by contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of the collected samples. The methodology developed consists of objectively identifying meaningful variations of microplastic contamination through Monte Carlo simulation of all uncertainty sources. This work allowed us to conclude unequivocally that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major polymer type. The comparison of contamination levels was performed at a 99% confidence level. The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
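The Monte Carlo comparison above can be sketched in miniature: model each year's contamination level as a value with a standard uncertainty, draw both repeatedly, and estimate the probability that one exceeds the other; a difference is declared significant at the 99% level only if that probability falls outside [0.005, 0.995]. The Gaussian uncertainty model and the function name are simplifying assumptions — the paper propagates heterogeneity, sampling, and analytical uncertainties individually.

```python
import random

def mc_exceedance(mean_a, u_a, mean_b, u_b, n_trials=20000, seed=1):
    """Monte Carlo estimate of P(A > B), with each contamination level
    modelled as Gaussian (value, standard uncertainty)."""
    rng = random.Random(seed)  # seeded for reproducibility
    exceed = sum(rng.gauss(mean_a, u_a) > rng.gauss(mean_b, u_b)
                 for _ in range(n_trials))
    return exceed / n_trials
```

When the two uncertainty intervals overlap heavily, P(A > B) stays near 0.5 and no significant change can be claimed, which is the situation the paper reports for 2018 versus 2019.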

Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty

Procedia PDF Downloads 58
382 Effects of Spectrotemporal Modulation of Music Profiles on Coherence of Cardiovascular Rhythms

Authors: I-Hui Hsieh, Yu-Hsuan Hu

Abstract:

The powerful effect of music is often associated with changes in physiological responses such as heart rate and respiration. Previous studies demonstrate that Mayer waves of blood pressure, the spontaneous rhythm occurring at 0.1 Hz, correspond to a progressive crescendo of the musical phrase. However, music contains dynamic changes in temporal and spectral features, so it remains unclear which aspects of musical structure optimally affect the synchronization of cardiovascular rhythms. This study investigates the independent contributions of spectral pattern, temporal pattern, and dissonance level to the synchronization of cardiovascular rhythms. The regularity of acoustic patterns occurring at a periodic rhythm of 0.1 Hz is hypothesized to elicit the strongest coherence of cardiovascular rhythms. Music excerpts taken from twelve pieces of the Western classical repertoire were modulated to contain varying degrees of pattern regularity of the acoustic envelope structure. Three levels of dissonance were manipulated by varying the harmonic structure of the accompanying chords. Electrocardiogram and photoplethysmography signals were recorded for 5 minutes at baseline and then while participants, in a sitting position, listened to the music excerpts presented in random order over headphones. Participants indicated the pleasantness of each music excerpt by adjusting a slider presented on screen. Analysis of the Fourier spectral power of blood pressure around 0.1 Hz showed a significant difference between music excerpts characterized by spectral and temporal pattern regularity and the same content in a random pattern. Phase coherence between heart rate and blood pressure increased significantly while participants listened to spectrally regular phrases compared to matched control phrases. The degree of dissonance of the accompanying chord sequence correlated with the level of coherence between heart rate and blood pressure.
Results suggest that low-level auditory features of music can entrain coherence of autonomic physiological variables. These findings have potential implications for using music as a clinical and therapeutic intervention for regulating cardiovascular functions.
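The spectral-power measure used above, power of the blood pressure signal around the 0.1 Hz Mayer-wave band, reduces to evaluating a single Fourier bin. A minimal single-frequency DFT is sketched below; the 1 Hz sampling rate in the test is an assumption (e.g. a beat-to-beat series resampled to 1 Hz), not a detail from the paper.

```python
import math

def power_at(signal, fs, freq):
    """Single-bin DFT power of `signal` (sampled at fs Hz) at `freq` Hz.

    Projects the signal onto a cosine and a sine at the target frequency
    and returns the squared magnitude, normalized by the signal length.
    """
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs)
             for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs)
             for i, x in enumerate(signal))
    return (re * re + im * im) / n
```

Comparing this quantity at 0.1 Hz across the regular-pattern and random-pattern conditions is the kind of contrast the study reports; a full analysis would of course use an FFT and average power over a narrow band around 0.1 Hz.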

Keywords: cardiovascular rhythms, coherence, dissonance, pattern regularity

Procedia PDF Downloads 124