Search results for: conformal invariance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 92

32 A Leaf-Patchable Reflectance Meter for in situ Continuous Monitoring of Chlorophyll Content

Authors: Kaiyi Zhang, Wenlong Li, Haicheng Li, Yifei Luo, Zheng Li, Xiaoshi Wang, Xiaodong Chen

Abstract:

Plant wearable sensors facilitate the real-time monitoring of plant physiological status. In situ monitoring of the plant chlorophyll content over days could provide valuable information on photosynthetic capacity, nitrogen content, and general plant health, yet such monitoring cannot be achieved with current chlorophyll measurement methods. Here, a miniaturized, plant-wearable chlorophyll meter was developed for rapid, non-destructive, in situ, and long-term chlorophyll monitoring. This reflectance-based chlorophyll sensor, 1.5 mm thick and 0.2 g in weight (1000 times lighter than a commercial chlorophyll meter), comprises a light emitting diode (LED) and two symmetric photodetectors (PDs) on a flexible substrate and is patched onto the upper leaf epidermis with a conformal light-guiding layer. A chlorophyll content index (CCI) calculated from this sensor shows a better linear relationship with leaf chlorophyll content (r² > 0.9) than the traditional chlorophyll meter. The meter communicates wirelessly with a smartphone to monitor leaf chlorophyll changes under various stresses, and in long-term use it indicates unhealthy plant status earlier than a chlorophyll meter or naked-eye observation. This wearable chlorophyll sensing patch is promising for smart and precision agriculture.
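
The abstract does not give the formula behind the CCI; a minimal sketch, assuming the index is derived from the ratio of the two photodetector readings (the function name, dark-signal correction, and log form are illustrative assumptions, not the paper's method):

```python
import math

def chlorophyll_content_index(pd_signal, pd_reference, dark=0.0):
    """Hypothetical CCI: negative log of the reflectance estimated as the
    ratio of the sensing PD signal to the reference PD signal, after
    dark-signal subtraction. Lower reflectance implies more chlorophyll."""
    reflectance = (pd_signal - dark) / (pd_reference - dark)
    return -math.log10(reflectance)

print(chlorophyll_content_index(pd_signal=120.0, pd_reference=900.0, dark=5.0))
```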

Keywords: plant wearable sensors, reflectance-based measurements, chlorophyll content monitoring, smart agriculture

Procedia PDF Downloads 86
31 Effect of Velocity Slip on Two Phase Flow in an Eccentric Annular Region

Authors: Umadevi B., Dinesh P. A., Indira. R., Vinay C. V.

Abstract:

A mathematical model is developed to study the simultaneous effects of particle drag and the slip parameter on the velocity and flow rate in an annular cross-sectional region bounded by two eccentric cylinders. In physiological flows, this phenomenon can be observed in an eccentric catheterized artery whose inner cylinder wall is impermeable and whose outer cylinder wall is permeable. Blood is a heterogeneous fluid: a liquid phase consisting of plasma carries a solid phase of suspended cells and proteins. The arterial wall becomes damaged with aging, and lipid molecules are deposited between damaged tissue cells; blood flow then increases towards the damaged tissues in the artery. In this investigation, blood is modeled as a two-phase fluid with a fluid phase and a particulate phase. The velocity of the fluid phase and the flow rate are obtained by transforming the eccentric annulus to a concentric annulus with a conformal mapping. The formulated governing equations are solved analytically for the velocity and flow rate, and numerical investigations are carried out by varying the eccentricity, slip, and drag parameters. An increase in the slip parameter signifies loss of fluid, so the velocity and flow rate decrease. As the particulate drag parameter increases, the velocity and flow rate likewise decrease. Eccentricity facilitates the transport of more fluid, so the velocity and flow rate increase.
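
The abstract does not spell out the mapping; for two non-concentric circles, a standard choice is the Möbius transformation built from the pair of points symmetric with respect to both circles. A minimal numerical sketch under that assumption, for a unit outer circle and an inner circle of radius r centered at a real offset c (0 < c, c + r < 1):

```python
import numpy as np

def eccentric_to_concentric(c, r):
    """Return a Möbius map taking the region between |z| = 1 and |z - c| = r
    to a concentric annulus rho < |w| < 1, plus the inner radius rho."""
    # z1, z2 are symmetric w.r.t. both circles: z1*z2 = 1, (z1-c)(z2-c) = r^2
    b = (1 + c**2 - r**2) / (2 * c)
    z1, z2 = b - np.sqrt(b**2 - 1), b + np.sqrt(b**2 - 1)
    T = lambda z: (z - z1) / (z - z2)        # sends z1 -> 0, z2 -> infinity
    scale = 1 / abs(T(1.0))                  # normalize outer image to |w| = 1
    return (lambda z: scale * T(z)), scale * abs(T(c + r))

phi, rho = eccentric_to_concentric(c=0.2, r=0.4)
print(rho, abs(phi(1j)))   # inner image radius; outer circle maps to |w| = 1
```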

Keywords: catheter, slip parameter, drag parameter, eccentricity

Procedia PDF Downloads 505
30 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils

Authors: Muqdad Al-Juboori, Bithin Datta

Abstract:

Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the seepage process, many approximation theories have been developed; however, applying these theories results in noticeable errors. The analytical solution, which involves a difficult conformal mapping procedure, can be applied only to simple, symmetrical problems. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on data simulated numerically with the SEEP/W software, that approximately simulates the seepage process beneath a hydraulic structure, and 2) develop and solve a linked simulation-optimization model, built on that surrogate, describing the seepage occurring under a concrete gravity dam, in order to obtain an optimum, safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
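
A minimal sketch of the linked surrogate-optimization idea, with randomly generated stand-in data in place of the SEEP/W simulations and hypothetical design variables, cost weights, and safety limit (the paper's actual variables, constraints, and GA settings are not given in the abstract):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
lo, hi = np.array([2.0, 2.0, 10.0]), np.array([10.0, 10.0, 40.0])

# Stand-in training set: 3 hypothetical design variables -> exit gradient.
X_sim = rng.uniform(lo, hi, size=(200, 3))
exit_grad = 0.5 / X_sim[:, 0] + 0.2 / X_sim[:, 2] + 0.01 * rng.normal(size=200)
surrogate = MLPRegressor((16, 16), max_iter=5000).fit(X_sim, exit_grad)

def fitness(x, i_safe=0.25):
    """Construction cost plus a large penalty if the surrogate-predicted
    exit gradient exceeds the assumed safe value i_safe."""
    cost = x @ np.array([3.0, 3.0, 1.0])
    g = surrogate.predict(x.reshape(1, -1))[0]
    return cost + 1e4 * max(0.0, g - i_safe)

# Simple real-coded GA: truncation selection, mean crossover, Gaussian mutation.
pop = rng.uniform(lo, hi, size=(40, 3))
for _ in range(100):
    parents = pop[np.argsort([fitness(p) for p in pop])[:20]]
    kids = (parents[rng.integers(20, size=40)] + parents[rng.integers(20, size=40)]) / 2
    pop = np.clip(kids + rng.normal(scale=0.2, size=kids.shape), lo, hi)
best = min(pop, key=fitness)
```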

Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis

Procedia PDF Downloads 205
29 Experimental Modelling Gear Contact with TE77 Energy Pulse Setup

Authors: Zainab Mohammed Shukur, Najlaa Ali Alboshmina, Ali Safa Alsaegh

Abstract:

The project investigated the tribological behavior of polyether ether ketone (PEEK1000) against PEEK1000 in a rolling-sliding (non-conformal) configuration with a slip ratio of 83.3%, tested using a TE77 test rig that measures wear mechanisms and the friction coefficient. Under marginal lubrication conditions, in the absence of a full film, a load of 100 N was used to simulate a gear torque of 7 N·m. The friction coefficient and wear mechanisms of PEEK were studied under reciprocating roll/slide conditions with water, ethylene glycol, silicone, and base oil. Tribological tests were conducted on a TE77 high-frequency tribometer with a disc-on-plate slide/roll configuration (the energy pulse criterion). An Alicona G5 optical 3D micro-coordinate measurement microscope was used to investigate the surface topography and wear mechanisms. Surface roughness had a significant effect on the friction coefficient and on the wear mechanisms in the PEEK/PEEK rolling-sliding contact test with ethylene glycol. When silicone, ethylene glycol, or oil was used as the lubricant, the steady-state friction coefficient was reached faster than with water. The results describe the effect of film thickness, at a slip ratio of 83.3%, on the tribological performance.

Keywords: polymer, rolling-sliding, energy pulse, gear contact

Procedia PDF Downloads 124
28 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality

Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye

Abstract:

When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings; it is obtained through matrix perturbation theory by exploiting the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, there is no notion of a “word” as in natural language processing; rather, there are sequence substrings of length k called k-mers. K-mer size matters, and it varies depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between k-mer size and embedding dimension. This is done by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Using the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in accurately computing embeddings as both the k-mer size and vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
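
A minimal sketch of the two central ingredients, k-mer tokenization and the PIP loss, under the usual definition of the PIP matrix as EEᵀ (toy embedding matrices stand in for trained LSA/Word2Vec/GloVe embeddings):

```python
import numpy as np

def kmers(seq, k):
    """Split a genomic sequence into overlapping k-mers, the 'words'."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def pip_loss(E1, E2):
    """Frobenius distance between pairwise-inner-product matrices; it is
    invariant to unitary rotations of either embedding space."""
    return np.linalg.norm(E1 @ E1.T - E2 @ E2.T, ord="fro")

print(kmers("GATTACA", 3))            # ['GAT', 'ATT', 'TTA', 'TAC', 'ACA']
rng = np.random.default_rng(0)
E_full = rng.normal(size=(100, 64))   # stand-in 'oracle' embedding
E_small = E_full[:, :16]              # truncated embedding dimension
print(pip_loss(E_full, E_small))
```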

Keywords: word embeddings, k-mer embedding, dimensionality reduction

Procedia PDF Downloads 114
27 Isolated Iterating Fractal Independently Corresponds with Light and Foundational Quantum Problems

Authors: Blair D. Macdonald

Abstract:

Nearly one hundred years after its origin, foundational quantum mechanics remains one of the greatest unexplained mysteries for physicists today. Within this time, chaos theory and its geometry, the fractal, have developed. In this paper, the propagation behaviour under iteration of a simple fractal, the Koch snowflake, is described and analysed. From an arbitrary observation point within the fractal set, the fractal propagates forward by oscillation (the focus of this study) and, viewed retrospectively, by exponential growth from a point beginning. It propagates a potentially infinite, exponentially oscillating sinusoidal wave of discrete triangle bits sharing many characteristics of light and quantum entities. The model's wave speed is potentially constant, offering insights into the perception and direction of time, where, to an observer travelling at the frontier of propagation, time may slow to a stop. In isolation, the fractal is a superposition of component bits where position and scale present a problem of location. In reality, this problem is experienced within fractal landscapes or fields where 'position' is only 'known' by the addition of information or markers. The quantum 'measurement problem', 'uncertainty principle', 'entanglement', and the classical-quantum interface are addressed; these are problems of scale invariance associated with isolated fractality. Dual forward and retrospective perspectives of the fractal model offer an opportunity for unification between quantum mechanics and cosmological mathematics, observations, and conjectures. Quantum and cosmological problems may be different aspects of the one fractal geometry.
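
A minimal sketch of the iteration being analysed, generating the Koch snowflake from a triangle by repeatedly inserting the outward-pointing 'triangle bits' (vertices are represented as complex numbers; the analysis of the propagating wave itself is beyond this sketch):

```python
import numpy as np

def koch_iterate(z):
    """One Koch iteration on a closed polygon of complex vertices: each
    edge becomes four edges with a triangular bit in the middle third."""
    out = []
    for a, b in zip(z, np.roll(z, -1)):
        d = (b - a) / 3
        out += [a, a + d, a + d + d * np.exp(-1j * np.pi / 3), a + 2 * d]
    return np.array(out)

snow = np.exp(2j * np.pi * np.arange(3) / 3)   # counterclockwise triangle
for _ in range(4):
    snow = koch_iterate(snow)                  # edge count grows 4x per step
print(len(snow))                               # 3 * 4**4 = 768 vertices
```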

Keywords: measurement problem, observer, entanglement, unification

Procedia PDF Downloads 69
26 Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener Filter for Quality Assurance of Medical Image

Authors: Hilal Naimi, Amelbahahouda Adamou-Mitiche, Lahcene Mitiche

Abstract:

The main problem in the area of medical imaging has been image denoising. The greatest challenge in image denoising is to preserve data-carrying structures such as surfaces and edges while achieving good visual quality. Algorithms with different denoising performances have been proposed over previous decades. More recently, models based on deep learning have shown great promise in outperforming all traditional approaches. However, these techniques are limited by the need for large training sample sizes and high computational costs. This research proposes a denoising approach based on the LDTCWT (Lifting Dual Tree Complex Wavelet Transform), using hybrid thresholding with a Wiener filter to enhance image quality. The LDTCWT is a lifting-based wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain their real and imaginary parts. This permits the transform to achieve approximate shift invariance and directionally selective filters while reducing computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
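
A minimal sketch of one plausible 'thresholding + Wiener' combination, applied to a generic coefficient array (the paper's exact rule and the LDTCWT subband structure are not given in the abstract; soft thresholding followed by an empirical Wiener gain is an illustrative assumption):

```python
import numpy as np

def hybrid_threshold(coeffs, sigma, thr):
    """Soft-threshold the coefficients, then attenuate the survivors with
    an empirical Wiener-style gain c^2 / (c^2 + sigma^2)."""
    soft = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)
    return soft * soft**2 / (soft**2 + sigma**2)

# would be applied subband by subband to the complex LDTCWT coefficients
rng = np.random.default_rng(1)
noisy = 5.0 * (np.arange(1000) > 500) + rng.normal(0.0, 1.0, 1000)
denoised = hybrid_threshold(noisy, sigma=1.0, thr=2.0)
```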

Keywords: lifting wavelet transform, image denoising, dual tree complex wavelet transform, wavelet shrinkage, wiener filter

Procedia PDF Downloads 139
25 The Three-Zone Composite Productivity Model of Multi-Fractured Horizontal Wells under Different Diffusion Coefficients in a Shale Gas Reservoir

Authors: Weiyao Zhu, Qian Qi, Ming Yue, Dongxu Ma

Abstract:

Due to the nano-micro pore structures and the massive multi-stage, multi-cluster hydraulic fracturing in shale gas reservoirs, the multi-scale seepage flows are much more complicated than in most conventional reservoirs and are crucial for the economic development of shale gas. In this study, a new multi-scale non-linear flow model was established and simplified, based on different diffusion and slip correction coefficients. Because different flow laws exist in the fracture network and the matrix zone, a three-zone composite model was proposed. Then, using a conformal transformation combined with the law of equivalent percolation resistance, the productivity equation of a horizontal fractured well was built, with consideration given to diffusion, slip, desorption, and absorption. An analytic solution was derived, and the interference of the multi-cluster fractures was analyzed. The results indicated that the diffusion of the shale gas occurs mainly in the transition and Fick diffusion regions. The matrix permeability was found to be influenced by slippage and diffusion, which is determined by the pore pressure and diameter according to the Knudsen number. With increased half-lengths of the fracture clusters, flow conductivity of the fractures, and permeability of the fracture network, the productivity of the fractured well also increased. Meanwhile, as the number of fractures increased, the distance between fractures decreased, and productivity increased only slowly due to mutual interference between the fractures. In regard to fractured horizontal wells, free gas was found to contribute the majority of the productivity, while the contribution of desorption increased with increased pressure differences.
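
A minimal sketch of the regime classification via the Knudsen number mentioned in the abstract, using the kinetic-theory mean free path for methane (the collision diameter, temperature, and regime bounds are common textbook values, not the paper's):

```python
import math

def knudsen_number(pressure_pa, pore_diameter_m, temperature_k=350.0):
    """Kn = mean free path / pore diameter, with the kinetic-theory mean
    free path for methane (collision diameter ~0.38 nm assumed)."""
    k_b, d_gas = 1.380649e-23, 0.38e-9
    mfp = k_b * temperature_k / (math.sqrt(2) * math.pi * d_gas**2 * pressure_pa)
    return mfp / pore_diameter_m

def flow_regime(kn):
    if kn < 0.001: return "continuum (Darcy) flow"
    if kn < 0.1:   return "slip flow"
    if kn < 10:    return "transition diffusion"
    return "free-molecular (Knudsen/Fick) diffusion"

print(flow_regime(knudsen_number(20e6, 5e-9)))   # a 5 nm pore at 20 MPa
```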

Keywords: multi-scale, fracture network, composite model, productivity

Procedia PDF Downloads 252
24 Determination of Optimum Parameters for Thermal Stress Distribution in Composite Plate Containing a Triangular Cutout by Optimization Method

Authors: Mohammad Hossein Bayati Chaleshtari, Hadi Khoramishad

Abstract:

Minimizing the stress concentration around a triangular cutout in an infinite perforated plate subjected to a uniform heat flux that induces thermal stresses is an important consideration in engineering design. Furthermore, understanding the parameters that affect stress concentration, and selecting them properly, enables the designer to achieve a reliable design. In thermal stress analysis, the parameters affecting the stress distribution around a cutout in orthotropic materials include the fiber angle, flux angle, bluntness, and rotation angle of the cutout. This paper examines the effect of these parameters on the thermal stress analysis of infinite perforated plates with a central triangular cutout. The least thermal stress around the cutout is sought using a novel swarm-intelligence optimization technique, the dragonfly optimizer, inspired by the living and hunting behavior of dragonflies in nature. In this study, using two-dimensional thermoelastic theory and based on Likhnitskii's complex variable technique, the stress analysis of an orthotropic infinite plate with a circular cutout under a uniform heat flux was extended to a plate containing a quasi-triangular cutout in the thermal steady-state condition. To achieve this, a conformal mapping function was used to map the infinite plate containing a quasi-triangular cutout onto the outside of a unit circle. The plate is under a uniform heat flux at infinity; Neumann boundary conditions and a thermally insulated condition at the edge of the cutout were considered.
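
The paper's mapping function is not reproduced in the abstract; a common first-order form for a triangle-like hole maps the unit circle to the cutout boundary as z = R·e^{iα}(s + w s⁻²), with w acting as the bluntness parameter and α as the rotation angle. A sketch under that assumption:

```python
import numpy as np

def quasi_triangle_boundary(R=1.0, w=0.25, alpha=0.0, n=400):
    """Image of the unit circle s = e^{it} under z = R e^{i alpha}(s + w/s^2);
    w = 0 gives a circle, larger w sharpens the three corners. The paper's
    exact mapping may include more series terms."""
    s = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
    return R * np.exp(1j * alpha) * (s + w / s**2)

pts = quasi_triangle_boundary(w=0.25, alpha=np.pi / 6)   # rotated blunt triangle
print(pts[:3])
```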

Keywords: infinite perforated plate, complex variable method, thermal stress, optimization method

Procedia PDF Downloads 121
23 Temporal Focus Scale: Examination of the Reliability and Validity in Japanese Adolescents and Young Adults

Authors: Yuta Chishima, Tatsuya Murakami, Michael McKay

Abstract:

Temporal focus is described as one component of an individual's time perspective and is defined as the attention individuals devote to thinking about the past, present, and future. It affects how people incorporate perceptions about past experiences, current situations, and future expectations into their attitudes, cognitions, and behavior. The 12-item Temporal Focus Scale (TFS) comprises three factors (past, current, and future focus). The purpose of this study was to examine the reliability and validity of TFS scores in Japanese adolescents and young adults. The TFS was translated into Japanese by a professional translator, and the original author confirmed the back-translated items. Study 1 involved 979 Japanese university students aged 18-25 years in a questionnaire-based study. The hypothesized three-factor structure (with reliability) was confirmed, although there were problems with item 10. Internal consistency estimates for scores without item 10 were over .70, and test-retest reliability was also adequate. To verify concurrent and convergent validity, we tested the relationship between TFS scores and life satisfaction, time perspective, self-esteem, and career efficacy. Results of correlational analyses supported our hypotheses: specifically, future focus was strongly correlated with career efficacy, while past and current focus were not. Study 2 involved 1030 Japanese junior and senior high school students aged 12-18 years in a questionnaire-based study, and results of multigroup analyses supported the age invariance of the TFS.
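
Internal consistency of the kind reported here is typically Cronbach's alpha; a minimal sketch with toy 7-point ratings standing in for the actual TFS item scores (the four-item subscale and sample size merely mirror the abstract's setup):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(3)
scores = rng.integers(1, 8, size=(979, 4))   # toy data, not the study's
print(round(cronbach_alpha(scores), 2))
```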

Keywords: Japanese, reliability, scale, temporal focus, validity

Procedia PDF Downloads 328
22 Development of a Sequential Multimodal Biometric System for Web-Based Physical Access Control into a Security Safe

Authors: Babatunde Olumide Olawale, Oyebode Olumide Oyediran

Abstract:

A security safe is a place or building where classified documents and precious items are kept. Many technologies have been used to prevent unauthorised persons from gaining access to such safes. However, frequent reports of unauthorised persons gaining access to security safes in order to remove documents and items point to a security gap remaining in the technologies currently used for access control. In this paper, we address this problem by developing a multimodal biometric system for physical access control into a security safe using face and voice recognition. The safe is accessed by the combination of face and speech pattern recognition, in that sequential order. User authentication is achieved through a camera/sensor unit and a microphone unit, both attached to the door of the safe. The user's face is captured by the camera/sensor, while the speech is captured by the microphone unit. The Scale Invariant Feature Transform (SIFT) algorithm was used to train images to form templates for the face recognition system, while the Mel-Frequency Cepstral Coefficients (MFCC) algorithm was used to train the speech recognition system to recognise authorised users' speech. Both algorithms were hosted on two separate web-based servers, and for automatic analysis the developed system was simulated in a MATLAB environment. The results obtained show that the developed system was able to grant access to authorised users while denying unauthorised persons access to the security safe.
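
The authors implemented and simulated the system in MATLAB; the sketch below re-expresses the sequential face-then-voice decision in Python with OpenCV SIFT matching and librosa MFCCs, purely as an illustration (the thresholds, template format, and matching rules are assumptions, not the paper's trained models):

```python
import cv2                      # needs opencv-contrib-python for SIFT
import librosa
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def face_match_count(probe_img, template_desc, ratio=0.75):
    """Count SIFT matches passing Lowe's ratio test against a template."""
    _, desc = sift.detectAndCompute(probe_img, None)
    pairs = matcher.knnMatch(desc, template_desc, k=2)
    return sum(1 for m, n in pairs if m.distance < ratio * n.distance)

def voice_signature(wav_path):
    """Mean MFCC vector as a simple speaker signature."""
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

def grant_access(img, wav_path, face_desc, voice_sig,
                 face_thr=25, voice_thr=30.0):
    """Sequential decision: face first, then voice, as in the paper."""
    if face_match_count(img, face_desc) < face_thr:
        return False
    return np.linalg.norm(voice_signature(wav_path) - voice_sig) < voice_thr
```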

Keywords: access control, multimodal biometrics, pattern recognition, security safe

Procedia PDF Downloads 308
21 Role of Micro-Patterning on Stem Cell-Material Interaction Modulation and Cell Fate

Authors: Lay Poh Tan, Chor Yong Tay, Haiyang Yu

Abstract:

Micro-contact printing is a form of soft lithography that uses the relief patterns on a master polydimethylsiloxane (PDMS) stamp to form patterns of self-assembled monolayers (SAMs) of ink on the surface of a substrate through a conformal contact technique. Here, we adopt this method to print proteins of different dimensions on our biodegradable polymer substrates. We began by printing 20-500 μm scale lanes of fibronectin to engineer the shape of bone marrow-derived human mesenchymal stem cells (hMSCs). After 8 hours of culture, the hMSCs adopted elongated shapes, and upon analysis of the gene expressions, genes commonly associated with myogenesis (GATA-4, MyoD1, cTnT and β-MHC) and neurogenesis (NeuroD, Nestin, GFAP, and MAP2) were up-regulated, whereas gene expression associated with osteogenesis (ALPL, RUNX2, and SPARC) was either down-modulated or remained at the nominal level. This is the first evidence that cellular morphology control via micropatterning can be used to modulate stem cell fate without external biochemical stimuli. We extended our studies to modulate the focal adhesions (FAs) rather than the macro shape of cells. Micro-contact printed islands of smaller dimensions were investigated, and we successfully regulated the FAs into dense FAs and elongated FAs by micropatterning. Additionally, the combined effects of hard (40.4 kPa) and intermediate (10.6 kPa) polyacrylamide (PA) gels and FA patterning on hMSC differentiation were studied. Results showed that FAs and matrix compliance play an important role in hMSC differentiation, that there is cross-talk between different physical stimulants, and that the significance of these stimuli can only be realized if they are combined at the optimum level.

Keywords: micro-contact printing, polymer substrate, cell-material interaction, stem cell differentiation

Procedia PDF Downloads 153
20 Classification on Statistical Distributions of a Complex N-Body System

Authors: David C. Ni

Abstract:

Contemporary models for N-body systems are based on temporal, two-body, and mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models remain aimed mainly at specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane – the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and the interval [-t, t] exhibits various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which manifest the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.
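
The extended Blaschke functions themselves are not specified in the abstract; a minimal sketch that iterates the classical Blaschke factor, the form the EBF generalize, and collects the orbit tail whose distribution would then be classified:

```python
import numpy as np

def blaschke(z, a):
    """Classical Blaschke factor; an automorphism of the unit disk."""
    return (z - a) / (1.0 - np.conj(a) * z)

def orbit_tail(z0, a, n=2000, burn=500):
    """Iterate and keep the tail of the orbit to inspect its distribution."""
    z, tail = z0, []
    for i in range(n):
        z = blaschke(z, a)
        if i >= burn:
            tail.append(z)
    return np.array(tail)

tail = orbit_tail(0.3 + 0.2j, a=0.5 * np.exp(0.7j))
print(tail.real.mean(), tail.imag.std())   # crude distribution summary
```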

Keywords: blaschke, lorentz transformation, complex variables, continuous, discrete, canonical, classification

Procedia PDF Downloads 286
19 Screening Psychological Wellness in a South African Banking Industry: Psychometric Properties of the Sense of Coherence-29 Questionnaire and Multifactor Leadership Questionnaire

Authors: Nisha Harry, Keshia Sing

Abstract:

Orientation: The Multifactor Leadership Questionnaire (MLQ) and the Sense of Coherence-29 scale (SCS) are effective tools for assessing the prevalence and underlying structures of empirically based taxonomies related to leadership and wellbeing. Research purpose: The purpose of the study was to test the psychometric properties of the SCS and the MLQ when screening for psychological wellness indices within the banking industry in South Africa. Motivation for the study: The contribution of these two instruments to determining psychological wellness in a banking work environment is unique. Research design, approach, or method: The sample consisted of (N = 150) financial staff employed in a South African banking organisation. The age distribution of the sample was 37% (30-40 yrs), 31% (20-30 yrs), 26% (40-50 yrs), and 6% (50+ yrs); 52% were male and 48% female. White respondents formed the largest group at 29%, African 26%, Coloured 23%, and Indian 22%. Main findings: Results from the exploratory factor analysis revealed a two-factor structure as the most satisfactory, and confirmatory factor analyses showed that the two-factor model displayed better goodness-of-fit indices. Practical implications: The factor structures of the Sense of Coherence-29 scale (SCS) and the Multifactor Leadership Questionnaire (MLQ) have a value-added focus for determining psychological wellness among banking staff. It is essential to take these constructs into account when developing employee wellness interventions. Contribution/value add: Understanding the psychometric properties of the SCS (self-reported form) and the MLQ contributes to screening psychological wellness indices, such as coping, within the banking industry of a developing country like South Africa. Leaders are an important part of the implementation process of organisational employee wellness practices.

Keywords: factorial structure, leadership, measurement invariance, psychological wellness, sense of coherence

Procedia PDF Downloads 79
18 Linearly Polarized Single Photon Emission from Nonpolar, Semipolar and Polar Quantum Dots in GaN/InGaN Nanowires

Authors: Snezana Lazic, Zarko Gacevic, Mark Holmes, Ekaterina Chernysheva, Marcus Müller, Peter Veit, Frank Bertram, Juergen Christen, Yasuhiko Arakawa, Enrique Calleja

Abstract:

The study reports how the pencil-like morphology of a homoepitaxially grown GaN nanowire can be exploited for the fabrication of a thin conformal InGaN nanoshell hosting nonpolar, semipolar and polar single photon sources (SPSs). All three SPS types exhibit narrow emission lines (FWHM ~0.35-2 meV) and high degrees of linear optical polarization (P > 70%) in low-temperature micro-photoluminescence (µ-PL) experiments and are characterized by pronounced antibunching in photon correlation measurements (corrected g⁽²⁾(0) < 0.3). The quantum-dot-like exciton localization centers induced by compositional fluctuations within the InGaN nanoshell are identified as the driving mechanism for the single photon emission. As confirmed by a low-temperature transmission electron microscopy study combined with cathodoluminescence (TEM-CL), the crystal region (i.e., the non-polar m-, semi-polar r- and polar c-facets) hosting the single photon emitters strongly affects their emission wavelength, which ranges from ultraviolet for the non-polar to visible for the polar SPSs. The photon emission lifetime is also found to be facet-dependent, varying from sub-nanosecond time scales for the non- and semi-polar SPSs to a few nanoseconds for the polar ones. These differences are mainly attributed to facet-dependent indium content and electric field distribution across the hosting InGaN nanoshell. The pencil-like InGaN nanoshell reported here is the first single nanostructure able to host all three types of single photon emitters and is thus a promising building block for tunable quantum light devices integrated into future photonic and optoelectronic circuits.

Keywords: GaN nanowire, InGaN nanoshell, linear polarization, nonpolar, semipolar, polar quantum dots, single-photon sources

Procedia PDF Downloads 368
17 Using Variation Theory in a Design-based Approach to Improve Learning Outcomes of Teachers Use of Video and Live Experiments in Swedish Upper Secondary School

Authors: Andreas Johansson

Abstract:

Conceptual understanding needs to be grounded in observation of physical phenomena, experiences, or metaphors. Observation of physical phenomena using demonstration experiments has a long tradition within physics education, and students need to develop mental models that relate the observations to concepts from scientific theories. This study investigates how live and video experiments involving an acoustic trap, used to visualize particle-field interaction, field properties and particle properties, can help develop students' mental models, and how the two formats can be used differently to realize their potential as teaching tools. Initially, they were treated as analogues and the lesson designs were kept identical. With a design-based approach, the experimental and video designs, as well as best practices for each teaching tool, were then developed in iterations. Variation theory was used as a theoretical framework to analyze the planned and realized patterns of variation and invariance, in order to explain learning outcomes as measured by a pre-/post-test consisting of conceptual multiple-choice questions inspired by the Force Concept Inventory and the Force and Motion Conceptual Evaluation. Interviews with students and teachers informed the design of experiments and videos in each iteration. The lesson designs and the live and video experiments have been developed to help teachers improve student learning and make school physics more interesting, by involving experimental setups that are usually out of reach and bridging the gap between what happens in classrooms and in science research. Since conceptual knowledge also raises students' interest in physics, the aim is to increase their chances of pursuing careers in science, technology, engineering, or mathematics.

Keywords: acoustic trap, design-based research, experiments, variation theory

Procedia PDF Downloads 63
16 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. Many background subtraction algorithms have been suggested, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and focus mainly on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing illumination invariance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum; this mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence; the background and foreground images are then separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
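
A minimal single-stream sketch of the pipeline with OpenCV (the paper enhances background and foreground separately and may use its own GMM; here CLAHE is applied per frame, MOG2 stands in for the K = 5 Gaussian mixture, and the video filename is hypothetical):

```python
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
mog.setNMixtures(5)                          # K = 5 Gaussians per pixel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("scene.avi")          # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    stable = clahe.apply(gray)               # mitigate illumination changes
    mask = mog.apply(stable)                 # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erode then dilate
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
cap.release()
```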

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 237
15 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers, with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations weigh the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organizations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, helping improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes the novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. These promising results show that enhanced disk-based systems can facilitate improved hybrid data management within the broader context of in-memory systems.

Keywords: in-memory database, disk-based system, hybrid database, concurrency control

Procedia PDF Downloads 394
14 Kirigami Designs for Enhancing the Electromechanical Performance of E-Textiles

Authors: Braden M. Li, Inhwan Kim, Jesse S. Jur

Abstract:

One of the fundamental challenges in the electronic textile (e-textile) industry is the mismatch in compliance between rigid electronic components and the soft textile platforms onto which they are integrated. To address these problems, various printing technologies using conductive inks have been explored in an effort to improve electromechanical performance without sacrificing the innate properties of the printed textile. However, current printing methods deposit densely layered coatings onto textile surfaces with low through-plane wetting, resulting in poor electromechanical properties. This work presents an inkjet printing technique in conjunction with unique Kirigami cut designs to address these issues for printed smart textiles. By utilizing particle-free reactive silver inks, our inkjet process produces conformal, micron-thick silver coatings that surround the individual fibers of the printed smart textile. This results in a highly conductive (0.63 Ω sq⁻¹) printed e-textile while maintaining the innate properties of the textile material, including stretchability, flexibility, breathability, and fabric hand. Kirigami is the Japanese art of paper cutting; by using periodic cut designs, Kirigami imparts enhanced flexibility and delocalizes stress concentrations. Kirigami cut design parameters (i.e., cut spacing and length) were correlated with both the mechanical and electromechanical properties of the printed textiles. We demonstrate that designs using a higher cut-out ratio exponentially soften the textile substrate: our designs achieve a 30x improvement in overall stretchability, a 1000x decrease in elastic modulus, and minimal resistance change over strain regimes of 100-200% when compared to uncut designs. We also show minimal resistance change of our Kirigami-inspired printed devices after being stretched to 100% for 1000 cycles. Lastly, we demonstrate a Kirigami-inspired electrocardiogram (ECG) monitoring system that improves stretchability without sacrificing signal acquisition performance. Overall, this study identifies fundamental parameters affecting the performance of e-textiles and their scalability in the wearable technology industry.

Keywords: kirigami, inkjet printing, flexible electronics, reactive silver ink

Procedia PDF Downloads 118
13 Introduction of Para-Sasaki-Like Riemannian Manifolds and Construction of New Einstein Metrics

Authors: Mancho Manev

Abstract:

The concept of almost paracontact Riemannian manifolds (abbr., apcR manifolds) was introduced by I. Sato in 1976 as an analogue of almost contact Riemannian manifolds. The notion of an apcR manifold of type (p,q) was defined by S. Sasaki in 1980, where p and q are, respectively, the multiplicities of the structure eigenvalues 1 and -1; the structure also has a simple eigenvalue of 0. In our work, we consider (2n+1)-dimensional apcR manifolds of type (n,n), i.e., the paracontact distribution of the studied manifold can be considered as a 2n-dimensional almost paracomplex Riemannian distribution with an almost paracomplex structure and structure group O(n) × O(n). The aim of the present study is to introduce a new class of apcR manifolds. Such a manifold is obtained by requiring that a certain Riemannian cone constructed over it be a paraholomorphic paracomplex Riemannian manifold (abbr., phpcR manifold). We call it a para-Sasaki-like Riemannian manifold (abbr., pSlR manifold) and give some explicit examples. We study the structure of pSlR spaces and find that the paracontact form η is closed and that each pSlR manifold can locally be considered as a certain product of the real line with a phpcR manifold, which is locally a Riemannian product of two equidimensional Riemannian spaces. We also obtain that the curvature of a pSlR manifold is completely determined by the curvature of the underlying local phpcR manifold. Moreover, the ξ-directed Ricci curvature is equal to -2n, while in the Sasaki case it is 2n. Accordingly, the pSlR manifolds can be interpreted as a counterpart of the Sasaki manifolds: the skew-symmetric part of ∇η vanishes, while in the Sasaki case the symmetric part vanishes. We define a hyperbolic extension of a (complete) phpcR manifold that resembles a certain warped product and indicate that it is a (complete) pSlR manifold. In addition, we consider the hyperbolic extension of a phpcR manifold and prove that if the initial manifold is a complete Einstein manifold with negative scalar curvature, then the resulting manifold is a complete Einstein pSlR manifold with negative scalar curvature. In this way, we produce new examples of complete Einstein Riemannian manifolds with negative scalar curvature. Finally, we define and study paracontact conformal/homothetic deformations, deriving a subclass that preserves the para-Sasaki-like condition, and find that the Ricci tensor is invariant under a paracontact homothetic deformation of a pSlR space.

Keywords: almost paracontact Riemannian manifolds, Einstein manifolds, holomorphic product manifold, warped product manifold

Procedia PDF Downloads 189
12 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) applied to low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. Scenes contain scattered objects of different sizes, categories, layouts, and numbers, so for scene classification it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the different CNNs at multiple scales, it is found that each CNN works better in different scale ranges; a scale-wise CNN adaptation is therefore reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the scales are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
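
A minimal sketch of the Fisher Vector aggregation step over local activations, using a diagonal-covariance GMM from scikit-learn (random vectors stand in for the mid-layer CNN features; the normalization follows the common power + L2 recipe, which may differ in detail from the paper's scale-wise normalization):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(feats, gmm):
    """First- and second-order FV statistics of local descriptors."""
    q = gmm.predict_proba(feats)                      # (N, K) soft assignments
    mu, sig, pi = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
    parts = []
    for k in range(gmm.n_components):
        d = (feats - mu[k]) / sig[k]
        parts.append((q[:, k:k+1] * d).sum(0) / (len(feats) * np.sqrt(pi[k])))
        parts.append((q[:, k:k+1] * (d**2 - 1)).sum(0)
                     / (len(feats) * np.sqrt(2 * pi[k])))
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalization
    return fv / np.linalg.norm(fv)                    # L2 normalization

feats = np.random.default_rng(0).normal(size=(1000, 64))  # stand-in activations
gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(feats)
print(fisher_vector(feats, gmm).shape)                # (2 * 16 * 64,) = (2048,)
```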

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 309
11 Evaluation of Intervention Effectiveness from the Client Perspective: Dimensions and Measurement of Wellbeing

Authors: Neşe Alkan

Abstract:

Purpose: The point that applied/clinical psychology, the practice and research discipline of the mental health field, has reached today can be summarized as the need to address people's psychological wellbeing from multiple perspectives, with the goal of moving it to a higher level. Clients' subjective assessment of their own condition and wellbeing is an integral part of evidence-based interventions. There is a need for tools through which clients can evaluate, in a valid and reliable manner, the effectiveness of the psychotherapy or intervention performed with them and its contribution to their wellbeing. The aim of this research was to meet this need by testing the reliability and validity of the index in Turkish and exploring its usability in the practices of both researchers and psychotherapists. Method: A total of 213 adults aged 18-54, of whom 69.5% were working and 29.5% were university students, were included in the study. Along with their demographic information, the participants were administered a set of scales via an online platform: wellbeing, life satisfaction, spiritual satisfaction, shopping addiction, and loneliness. The construct validity of the wellbeing scale was tested with exploratory and confirmatory factor analyses, convergent and discriminant validity were tested with two-way full and partial correlation analyses, and measurement invariance was tested with one-way analysis of variance. Results: Factor analyses showed that the scale consists of six dimensions, as in its original structure. The internal consistency of the scale was Cronbach α = .82. Two-way correlation analyses revealed that the wellbeing scale total score was positively correlated with general life satisfaction (r = .62) and spiritual satisfaction (r = .29), as expected, and negatively correlated with loneliness (r = -.51) and shopping addiction (r = -.15). While the scale score did not vary by gender, previous illness, or nicotine addiction, participants who had used antidepressant medication during the past year had lower total wellbeing scores than those who had not (F(1,204) = 7.713, p = .005). Conclusion: The 12-item wellbeing scale, consisting of six dimensions, can be used in research and health sciences practice as a valid and reliable measurement tool. Further research examining the reliability and validity of the scale in other widely used languages, such as Spanish and Chinese, is recommended.

Keywords: wellbeing, intervention effectiveness, reliability and validity, effectiveness

Procedia PDF Downloads 160
10 Cardiac Pacemaker in a Patient Undergoing Breast Radiotherapy-Multidisciplinary Approach

Authors: B. Petrović, M. Petrović, L. Rutonjski, I. Djan, V. Ivanović

Abstract:

Objective: Cardiac pacemakers are sensitive to radiotherapy treatment in two ways: electromagnetic interference from the medical linear accelerator producing ionizing radiation, which affects the electronics within the pacemaker, and the absorption of dose by the device. On the other hand, patients with a cardiac pacemaker at the site of a tumor are rather rare, and a single clinic rarely accumulates experience in the management of such patients. The widely accepted international guidelines for the management of radiation oncology patients recommend that these patients be closely monitored and examined by a cardiologist before, during, and after radiotherapy treatment, with their device and condition followed up. The number of patients having both cancer and a pacemaker grows every year, as the incidences of both cancer and cardiac disease are inevitably growing. Materials and methods: A female patient, age 69, was diagnosed with valvular cardiomyopathy; she received a prosthetic mitral valve in 1993 and a pacemaker implant in 2005, and the cancer was diagnosed in 2012. She was cardiologically stable and came to the radiation therapy department with a diagnosis of right breast cancer, with the tumor in the upper lateral quadrant of the right breast. Since all of her lymph nodes were positive (28 in total), the supraclavicular region had to be irradiated as well as the breast with the tumor bed. She had previously received chemotherapy, approved by the cardiologist. The patient was classified as high risk, as the device was within the field of irradiation and she was highly dependent on her pacemaker. The radiation therapy plan was conducted as 3D conformal therapy. The delineated target was the breast with the supraclavicular region, where the pacemaker was actually placed; the pacemaker was added as an organ at risk, to estimate the dose to the device and its components as recommended. Both targets received 50 Gy in 25 fractions (20% of the pacemaker received 50 Gy, and 60% of the device received 40 Gy); the electrode to the heart received between 1 Gy and 50 Gy. Verification of the planned and delivered dose was performed. Results: Evaluation of the patient's status according to the guidelines, and especially of all associated risks during treatment, was done. The patient was irradiated with the prescribed dose and followed up for a whole year, with no symptoms of pacemaker failure during or after treatment in the follow-up period. The functionality of the device was estimated to be unchanged according to its parameters (electrode impedance and battery energy). Conclusion: The patient was closely monitored according to published guidelines during irradiation and afterwards. The pacemaker, irradiated with the full dose, did not show any signs of failure, despite the concerns in published recommendations but in line with other published data.

Keywords: cardiac pacemaker, breast cancer, radiotherapy treatment planning, complications of treatment

Procedia PDF Downloads 416
9 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots

Authors: Mrinalini Ranjan, Sudheesh Chethil

Abstract:

Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuations from local linear trends. The scale invariance of these signals is well captured in their multifractal characterisation using DFA, and the analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in epileptic EEG signals, quantifying short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure compared to the pre-seizure period, with a subsequent increase post-seizure, while the long-term scaling exponent increases during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour: the long-term scaling exponent increases during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identifying the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient. To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters such as diagonal length, entropy, recurrence rate, and determinism for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are higher during the seizure period than post-seizure, whereas for some patients the post-seizure values exceeded those during seizure; we attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in better understanding and characterising epileptic EEG signals through nonlinear analysis.
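
A minimal sketch of DFA as described, with white noise standing in for an EEG channel; fitting separate slopes over small and large window sizes would give the short- and long-range exponents reported in the paper:

```python
import numpy as np

def dfa(x, scales):
    """Return the fluctuation F(n) for each window size n; the scaling
    exponent is the slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.square(res))))   # RMS of detrended windows
    return np.array(F)

sig = np.random.default_rng(7).normal(size=4096)     # stand-in EEG channel
scales = np.unique(np.logspace(2, 9, 20, base=2).astype(int))
alpha = np.polyfit(np.log(scales), np.log(dfa(sig, scales)), 1)[0]
print(round(alpha, 2))                               # ~0.5 for white noise
```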

Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots

Procedia PDF Downloads 157
8 Multi-Institutional Report on Toxicities of Concurrent Nivolumab and Radiation Therapy

Authors: Neha P. Amin, Maliha Zainib, Sean Parker, Malcolm Mattes

Abstract:

Purpose/Objectives: Combination immunotherapy (IT) and radiation therapy (RT) is an actively growing field of clinical investigation, owing to promising synergistic, immune-mediated effects observed in preclinical studies and to clinical case reports of abscopal effects. While there are many ongoing trials of combined IT-RT, data are still limited on toxicity and on outcome optimization regarding RT dose, fractionation, and the sequencing of RT with IT. Nivolumab (NIVO), an anti-PD-1 monoclonal antibody, has been rapidly adopted in the clinic over the past 2 years, resulting in more patients being considered for concurrent RT-NIVO. Knowledge of the toxicity profile of combined RT-NIVO is important for both patient and physician when making educated treatment decisions. The acute toxicity profile of concurrent RT-NIVO was analyzed in this study. Materials/Methods: A retrospective review of all consecutive patients who received NIVO from 1/2015 to 5/2017 at 4 separate centers within two separate institutions was performed. Patients who completed a course of RT from 1 day prior to the initial NIVO infusion through 1 month after the last NIVO infusion were considered to have received concurrent therapy and were included in the subsequent analysis. Descriptive statistics are reported for patient/tumor/treatment characteristics and the acute toxicities observed within 3 months of RT completion. Results: Among 261 patients who received NIVO, 46 (17.6%) received concurrent RT to 67 different sites. The median follow-up was 3.3 (0.1-19.8) months, and 11/46 (24%) were still alive at the last analysis. The most common histology, RT prescription, and treatment site were non-small cell lung cancer (23/46, 50%), 30 Gy in 10 fractions (16/67, 24%), and central thorax/abdomen (26/67, 39%), respectively. 79% (53/67) of irradiated sites were treated with a 3D-conformal technique and palliative dose-fractionation. Grade 3, 4, and 5 toxicities were experienced by 11, 1, and 2 patients, respectively. However, all grade 4 and 5 toxicities occurred outside the irradiated area and were attributed to NIVO alone, and only 4/11 (36%) of the grade 3 toxicities were attributed to RT-NIVO. The irradiated sites in these cases included the brain [2/10 (20%)] and central thorax/abdomen [2/19 (10.5%)], including one unexpected grade 3 pancreatitis following stereotactic body RT to the left adrenal gland. Conclusions: Concurrent RT-NIVO is generally well tolerated, though with potentially increased rates of severe toxicity when irradiating the lung, abdomen, or brain. Pending more definitive data, we recommend counseling patients on the potentially increased rates of side effects from combined immunotherapy and radiotherapy to these locations. Future prospective trials assessing fractionation and sequencing of RT with IT will help inform combined-therapy recommendations.

Keywords: combined immunotherapy and radiation, immunotherapy, Nivolumab, toxicity of concurrent immunotherapy and radiation

Procedia PDF Downloads 369
7 Stress, Anxiety and Its Associated Factors Within the Transgender Population of Delhi: A Cross-Sectional Study

Authors: Annie Singh, Ishaan Singh

Abstract:

Background: Transgender people have a gender identity different from the sex they were assigned at birth; their gender behaviour does not match their body anatomy. The community faces discrimination due to gender identity all across the world. The term transgender is an umbrella term for many people non-conforming to their biological identity; note that it is distinct from gender dysphoria, a DSM-5 disorder defined by the distress an individual experiences due to a non-conforming gender identity. Transgender people have been a part of Indian culture for ages, yet have continued to face exclusion and discrimination in society, which has led to the community's low socio-economic status. Various studies done across the world have established the role of discrimination, harassment, and exclusion in the development of psychological disorders. This study aimed to assess the frequency of stress and anxiety in the transgender population and to understand the various factors affecting them. Methodology: A cross-sectional survey of self-consenting transgender individuals above the age of 18 residing in Delhi was conducted to assess their socioeconomic status and experiential ecology. Participants were recruited with the help of NGOs. The survey was constructed using GAD-7 and PSS-10, two well-known scales, to assess anxiety and stress levels. Medians, means, and ranges are used to report continuous data where required, while frequencies and percentages are used for categorical data. The Chi-square test was used for associations and comparisons between groups in categorical data, while the Kruskal-Wallis H test was employed for associations involving multiple ordinal groups. SPSS v28.0 was used to perform the statistical analysis for this study. Results: The survey showed that the frequency of stress and anxiety is high in the transgender population, and the demographic data indicate a low socio-economic background. 44% of participants reported facing discrimination on a daily basis; the frequency of discrimination was higher in transwomen than in transmen, while stress and anxiety levels were similar in both. Only 34.5% of participants said they had receptive family or friends. The majority of participants (72.7%) reported a positive or neutral experience with healthcare workers, and the prevalence of discrimination was significantly lower in the more highly educated groups. Analysis of the data shows a positive impact of acceptance and reception on mental health, while discrimination correlates with higher levels of stress and anxiety. Conclusion: The widespread transphobia and discrimination faced by the transgender community have culminated in high levels of stress and anxiety in the transgender population, varying with multiple socio-demographic factors. Educating people about the LGBT community, along with the formation of support groups, policies, and laws, is required to establish trust and promote integration.

Keywords: transgender, gender, stress, anxiety, mental health, discrimination, exclusion

Procedia PDF Downloads 94
6 Investigation of Alumina Membrane-Coated Titanium Implants on Osseointegration

Authors: Pinar Erturk, Sevde Altuntas, Fatih Buyukserin

Abstract:

To achieve effective integration between an implant and bone, implant surfaces should have properties similar to those of bone tissue; in particular, mimicry of the chemical, mechanical, and topographic properties of bone is crucial for fast and effective osseointegration. Titanium-based biomaterials are preferred in clinical use, and several studies have coated such implants with oxide layers whose chemical and nanotopographic properties stimulate cell interactions for enhanced osseointegration. Current implantations show low success rates, especially in craniofacial applications involving large and vital zones, and an oxide-layer coating increases bone-implant integration, providing long-lasting implants without requiring revision surgery. Our aim in this study is to examine bone-cell behavior on titanium implants carrying an anodic aluminum oxide (AAO) layer and to assess their osseointegration potential for defects in large zones where spontaneous healing is difficult. In our study, aluminum-coated titanium surfaces were anodized in sulfuric, phosphoric, and oxalic acid, the most commonly used AAO anodization electrolytes. After morphological, chemical, and mechanical characterization of the AAO-coated Ti substrates, the viability, adhesion, and mineralization of adult bone cells on these substrates were analyzed. In addition, with atomic layer deposition (ALD), a sensitive and conformal coating technique, these surfaces were coated with pure alumina (5 nm); cell studies could thus also be performed on ALD-coated nanoporous oxide layers with suppressed ionic content. Lastly, to investigate the effect of topography on cell behavior, flat non-porous alumina layers formed by ALD on silicon wafers were compared with the porous ones. Cell viability was similar among the anodized surfaces, but pure-alumina-coated titanium and anodized surfaces showed higher viability than bare titanium and bare anodized ones. AAO-coated titanium surfaces anodized in phosphoric acid showed significantly higher mineralization after 21 days than bare titanium and the surfaces anodized in the other electrolytes. Bare titanium showed the second-highest mineralization, whereas titanium anodized in oxalic acid showed the lowest. No significant difference was found between bare titanium and the anodized surfaces except for the AAO titanium surface anodized in phosphoric acid. Currently, the osteogenic activity of these cells at the gene level is being investigated by quantitative real-time polymerase chain reaction (qRT-PCR) analysis of the RUNX-2, VEGF, OPG, and osteopontin genes; Western blotting will then be used to detect the corresponding proteins. Acknowledgment: The project is supported by The Scientific and Technological Research Council of Turkey.
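qRT-PCR expression data of the kind described above are commonly quantified with the 2^-ΔΔCt (Livak) method. The abstract does not state which quantification scheme the authors use, so the Python sketch below is a generic illustration with hypothetical Ct values, not the study's pipeline.

```python
# Generic 2^-ddCt relative quantification sketch (hypothetical Ct values).
def fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    d_ct_treated = ct_gene_treated - ct_ref_treated  # normalize to reference gene
    d_ct_control = ct_gene_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# e.g., RUNX-2 on a phosphoric-acid AAO surface vs. bare titanium (made-up Cts).
print(fold_change(24.1, 18.0, 26.3, 18.2))  # 4.0: > 1 suggests up-regulation
```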

Keywords: alumina, craniofacial implant, MG-63 cell line, osseointegration, oxalic acid, phosphoric acid, sulphuric acid, titanium

Procedia PDF Downloads 111
5 Luminescent Properties of Plastic Scintillator with Large-Area Photonic Crystal Prepared by a Combination of Nanoimprint Lithography and Atomic Layer Deposition

Authors: Jinlu Ruan, Liang Chen, Bo Liu, Xiaoping Ouyang, Zhichao Zhu, Zhongbing Zhang, Shiyi He, Mengxuan Xu

Abstract:

Plastic scintillators play an important role in the measurement of mixed neutron/gamma pulsed radiation, in neutron radiography, and in pulse-shape discrimination. For many of these applications, it is desirable that the photons produced by interactions between a plastic scintillator and the radiation be detected as completely as possible by the photoelectric detectors, and that more photons be emitted from the scintillator along the specific direction where the detectors are located. Unfortunately, a majority of the photons produced are trapped in the plastic scintillator by total internal reflection (TIR): a significant light-trapping effect arises whenever the incident angle of the internal scintillation light exceeds the critical angle. Some of the trapped photons are absorbed by the scintillator itself, and the others leave through its edges. This makes the light extraction efficiency of plastic scintillators very low. Moreover, only a small portion of the photons emitted from the scintillator can be detected effectively, because the distribution of their emission directions exhibits an approximately Lambertian angular profile following a cosine emission law. Therefore, enhancing the light extraction efficiency and adjusting the emission angular profile are the keys to increasing the number of photons detected. In recent years, photonic crystal structures have been applied to inorganic scintillators to successfully enhance light extraction efficiency and adjust the angular profile of the scintillation light. However, because the usual preparation methods for photonic crystals deteriorate the performance of plastic scintillators or even destroy them, both the preparation methods suitable for plastic scintillators and the luminescent properties of plastic scintillators with photonic crystal structures remain insufficiently investigated. Although we have previously fabricated photonic crystal structures on the surface of plastic scintillators by a modified self-assembly technique, achieving a large enhancement of light extraction efficiency without evident angular dependence of the scintillation light, the preparation of large-area (diameter larger than 6 cm) photonic crystals with a perfect periodic structure remains difficult. In this paper, large-area photonic crystals were first prepared on the scintillator surface by nanoimprint lithography, and a conformal layer of a high-refractive-index material was then deposited on the photonic crystal by atomic layer deposition, both to enhance the stability of the photonic crystal structures and to increase the number of leaky modes, thereby improving the light extraction efficiency. The luminescent properties of the plastic scintillator with photonic crystals prepared by this method are compared with those of a plastic scintillator without photonic crystals. The results indicate that the number of photons detected is increased by the enhanced light extraction efficiency and that the angular profile of the scintillation light exhibits evident angular dependence for the scintillator with photonic crystals. The preparation method described is beneficial to scintillation detection applications and lays an important technical foundation for plastic scintillators to meet special requirements in different application contexts.
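The light-trapping effect described above follows directly from Snell's law: photons striking the surface beyond the critical angle are totally internally reflected. The Python sketch below computes the critical angle and the escape-cone fraction for an assumed refractive index of n = 1.58 (typical of polyvinyltoluene-based plastic scintillators; the paper does not state its material's index).

```python
# Quantifying TIR light trapping for an assumed scintillator index n = 1.58.
import math

n = 1.58
theta_c = math.asin(1.0 / n)                   # critical angle at scintillator/air
escape_fraction = (1 - math.cos(theta_c)) / 2  # solid-angle fraction of one escape cone

print(f"critical angle = {math.degrees(theta_c):.1f} deg")  # ~39.3 deg
print(f"escape fraction per face = {escape_fraction:.3f}")  # ~0.11, i.e., most light trapped
```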

Keywords: angular profile, atomic layer deposition, light extraction efficiency, plastic scintillator, photonic crystal

Procedia PDF Downloads 176
4 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level to high-level tasks, has been widely recast in the deep learning framework. Deriving visual interpretations from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift-invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, because of the large number of unknowns to be optimized with respect to a training set that generally must be large enough for the model to generalize effectively. It is also necessary to limit the size of the convolution kernels because of the computational expense, despite recent advances in parallel processing hardware, which leads to the use of uniformly small convolution kernels throughout deep CNN architectures. However, it is often desirable to consider different scales when analyzing visual features at different layers of the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviations of the random filters. This allows a large number of random filters to be used at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filters, even though additional cost is incurred in computing the convolutions during the feed-forward procedure. The use of random kernels of varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments that quantitatively compare well-known CNN architectures with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
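As a rough sketch of the proposed idea, the following Python/PyTorch snippet builds a layer with frozen random convolution kernels of several sizes and trains only one scalar weight per filter. The kernel sizes, channel counts, and the exact scaling scheme are illustrative assumptions, not the authors' implementation.

```python
# Sketch: fixed random multi-scale kernels, one trainable scalar per filter.
import torch
import torch.nn as nn

class RandomKernelConv(nn.Module):
    def __init__(self, in_ch, filters_per_size=8, kernel_sizes=(3, 7, 11)):
        super().__init__()
        self.convs = nn.ModuleList()
        for k in kernel_sizes:
            conv = nn.Conv2d(in_ch, filters_per_size, kernel_size=k,
                             padding=k // 2, bias=False)
            conv.weight.requires_grad_(False)  # random filters stay fixed
            self.convs.append(conv)
        n_filters = filters_per_size * len(kernel_sizes)
        # The only unknowns: one scalar per random filter (here simply
        # trainable; in the paper they relate to the filters' std devs).
        self.scales = nn.Parameter(torch.ones(n_filters))

    def forward(self, x):
        # Padding keeps spatial size equal across branches, so they concatenate.
        out = torch.cat([conv(x) for conv in self.convs], dim=1)
        return out * self.scales.view(1, -1, 1, 1)

layer = RandomKernelConv(in_ch=3)
y = layer(torch.randn(2, 3, 32, 32))
print(y.shape)  # torch.Size([2, 24, 32, 32])
```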

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 263
3 Wideband Performance Analysis of C-FDTD-Based Algorithms in the Discretization Impoverishment of a Curved Surface

Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves

Abstract:

In this work, the wideband performance of the Conformal Finite-Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey, and Wenhua Yu for the Finite-Difference Time-Domain (FDTD) method is analyzed as the mesh discretization is impoverished. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Referred to in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement they bring has not been quantified broadly across wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, making it possible to exploit poorly discretized meshes that reduce simulation time and computational expense while retaining a desired accuracy. However, their application has limitations regarding the degree of mesh impoverishment and the desired frequency range. Therefore, the goal of this work is to examine both the wideband and the mesh-impoverishment performance of these approaches, to give a wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach consists of modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By accounting for these intersections, D-FDTD-Diel improves accuracy at the cost of computational preprocessing, which is a fair trade-off since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists of modifying the magnetic field update, taking into account the intersections of the PEC curved surface with the mesh cells and, for a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Like D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, with the additional drawback of having to meet stability-criterion requirements. The algorithms are formulated and applied to PEC and dielectric spherical scattering surfaces with meshes at different levels of discretization, with polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing how the wideband performance of the approaches drops as the mesh is impoverished. The benefits in computational efficiency, simulation time, and accuracy are also shown and discussed according to the desired frequency range, demonstrating that poorly discretized FDTD meshes can be exploited more efficiently while retaining the desired accuracy. The results provide broader insight into the limitations of the C-FDTD approaches in poorly discretized and wide-frequency-band simulations of dielectric and PEC curved surfaces, limitations that are not clearly defined or detailed in the literature and are therefore a novelty of this study. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future works.
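To make the D-FDTD-Diel idea concrete, the following one-dimensional Python sketch (normalized units; the geometry, PTFE permittivity placement, and fill fraction are hypothetical) weights the permittivity of the single cell cut by a vacuum/PTFE interface by the dielectric fraction filling it, which is the essence of the conformal electric-field update. The actual method operates on the dielectric content of 3D cell edges, not on a 1D cell as here.

```python
# 1D FDTD sketch of a conformal (fill-fraction-weighted) dielectric cell.
import numpy as np

nz, nt = 200, 300
S = 0.5                                  # Courant number (normalized units)
eps_ptfe = 2.1                           # relative permittivity of PTFE

eps = np.ones(nz)                        # vacuum everywhere...
eps[120:] = eps_ptfe                     # ...PTFE half-space from cell 120 on
fill = 0.4                               # hypothetical dielectric fraction of the cut cell
eps[119] = fill * eps_ptfe + (1 - fill)  # conformal weighted permittivity

ez, hy = np.zeros(nz), np.zeros(nz)
for n in range(nt):
    hy[:-1] += S * (ez[1:] - ez[:-1])                      # H update (uniform)
    ez[1:-1] += (S / eps[1:-1]) * (hy[1:-1] - hy[:-2])     # E update scaled by local eps
    ez[20] += np.exp(-0.5 * ((n - 40) / 10) ** 2)          # soft Gaussian source

print("field at probe behind interface:", ez[150])
```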

Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment

Procedia PDF Downloads 107