Search results for: vision computing

553 Symbolic Partial Differential Equations Analysis Using Mathematica

Authors: Davit Shahnazaryan, Diogo Gomes, Mher Safaryan

Abstract:

Many symbolic computations and manipulations required in the analysis of partial differential equations (PDEs) or systems of PDEs are tedious and error-prone. These computations arise when determining conservation laws, entropies, or integral identities, which are essential tools for the study of PDEs. Here, we discuss a new Mathematica package for the symbolic analysis of PDEs that automates multiple such tasks, saving time and effort. Methodologies: During the research, we used concepts from linear algebra and partial differential equations, developing algorithms grounded in theoretical mathematics to obtain the results mentioned below. Major findings: Our package provides the following functionalities: finding the symmetry group of different PDE systems; generating polynomials invariant with respect to different symmetry groups; simplifying integral quantities by integration by parts and null-Lagrangian cleaning; computing general forms of expressions by integration by parts; finding equivalent forms of an integral expression that are simpler or more symmetric; and determining necessary and sufficient conditions on the coefficients for the positivity of a given symbolic expression. Conclusion: Using this package, we can simplify integral identities and find conserved and dissipated quantities of a time-dependent PDE or system of PDEs. Examples from the theory of mean-field games and semiconductor equations are discussed.
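
The package itself is written in Mathematica; as a rough illustration of the kind of symbolic bookkeeping it automates, the sketch below uses Python's sympy (an assumption for illustration, not the authors' code) to verify a dissipation identity for the heat equation: on solutions of u_t = u_xx, the density u²/2 dissipates with flux u·u_x.

```python
# Minimal sketch (sympy, not the authors' Mathematica package) of checking
# a conservation/dissipation identity for the heat equation u_t = u_xx:
# we verify pointwise that
#   d/dt (u^2/2) - d/dx (u * u_x) + (u_x)^2 == 0  on solutions,
# which after integration in x gives d/dt \int u^2/2 dx = -\int u_x^2 dx.
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

residual = (sp.diff(u**2 / 2, t)             # time derivative of the density
            - sp.diff(u * sp.diff(u, x), x)  # divergence of the flux
            + sp.diff(u, x)**2)              # dissipation term

# Impose the PDE by substituting u_t -> u_xx, then simplify.
on_solutions = residual.subs(sp.Derivative(u, t), sp.Derivative(u, x, 2))
print(sp.simplify(on_solutions))  # prints 0: u^2/2 is a dissipated quantity
```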

Keywords: partial differential equations, symbolic computation, conserved and dissipated quantities, mathematica

Procedia PDF Downloads 163
552 Different Goals and Strategies of Smart Cities: Comparative Study between European and Asian Countries

Authors: Yountaik Leem, Sang Ho Lee

Abstract:

In this paper, the different goals of smart cities, and the ways countries have tried to reach them during planning and implementation, are discussed. Each country has embedded technologies into urban space, as ICTs (information and communication technologies) developed, for its own purposes and in its own way. For example, European countries tried to adapt technologies to reduce greenhouse gas emissions and counter global warming, while US-based global companies focused on ICT-enabled ways of life, such as EasyLiving of Microsoft™ and CoolTown of Hewlett-Packard™, during the last decade of the 20th century. In the North-East Asian countries, urban spaces with ICTs were developed on a large scale from the viewpoint of capitalism. The ubiquitous city, first introduced in Korea and named after Mark Weiser's concept of ubiquitous computing, pursued new urban development with advanced technologies and high-tech infrastructure, including wired and wireless networks. Japan has developed smart cities as comprehensive, technology-intensive cities intended to lead the nation's other industries in the future. Beyond these goals and strategies, new directions toward which smart cities are oriented are also suggested at the end of the paper. Like the Finnish smart community whose slogan is 'one more hour a day for citizens,' the recent trend is turning toward the everyday lives and cultures of human beings, not capital gains or physical urban spaces.

Keywords: smart cities, urban strategy, future direction, comparative study

Procedia PDF Downloads 262
551 Statistical Mechanical Approach in Modeling of Hybrid Solar Cells for Photovoltaic Applications

Authors: A. E. Kobryn

Abstract:

We present both descriptive and predictive modeling of the structural properties of blends of PCBM or organic-inorganic hybrid perovskites of the type CH3NH3PbX3 (X = Cl, Br, I) with P3HT, P3BT or the squaraine dye sensitizer SQ2, including adsorption on TiO2 clusters having a rutile (110) surface. In our study, we use a methodology that allows computing the microscopic structure of blends on the nanometer scale and gaining insight into the miscibility of their components at various thermodynamic conditions. The methodology is based on the integral equation theory of molecular liquids in the reference interaction site model (RISM) and uses the universal force field. Input parameters for RISM, such as optimized molecular geometries and the charge distribution of interaction sites, are derived using density functional theory methods. To compare the diffusivity of PCBM in binary blends with P3HT and P3BT, respectively, the study is complemented with MD simulation. Very good agreement with experiment and with reports of alternative modeling or simulation is observed for the PCBM in P3HT system. The performance of P3BT with perovskites, however, is as expected. The calculated nanoscale morphologies of blends of P3HT, P3BT or SQ2 with perovskites, including adsorption on TiO2, are all new and serve as an instrument in the rational design of organic/hybrid photovoltaics. They are used in collaboration with experts who actually make prototypes or devices for practical applications.

Keywords: multiscale theory and modeling, nanoscale morphology, organic-inorganic halide perovskites, three dimensional distribution

Procedia PDF Downloads 155
550 Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy

Authors: Zviad Ghadua, Biswa Bhattacharya

Abstract:

The Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help to find the rainfall amount that would cause flooding. An alternative approach, mostly tested on Italian Alpine catchments, is based on determining threshold discharges from past events and on checking whether an oncoming flood will exceed the critical discharge thresholds found beforehand. Both approaches suffer from large uncertainties in forecasting flash floods as, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty raises the question of whether a probabilistic model is preferable to a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach to flash flood forecasting. A prior probability of flooding is derived from historical data. Additional information, such as the antecedent moisture condition (AMC) and the rainfall amount over a given threshold, is used in computing the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed from the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we conclude that the Bayesian approach to flash flood forecasting provides more realistic forecasts than the FFG.
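
The posterior computation described above is a direct application of Bayes' rule. The sketch below, with invented probabilities for illustration (the Posina numbers are not reproduced here), shows one way to combine a historical prior with AMC and rainfall-threshold likelihoods, assuming the two predictors are conditionally independent.

```python
# Hedged sketch of the Bayesian update described in the abstract.
# All numbers are illustrative placeholders, not the Posina data.
def posterior_flood(p_flood,            # prior P(flood) from historical record
                    p_amc_given_flood, p_amc_given_no,    # wet-AMC likelihoods
                    p_rain_given_flood, p_rain_given_no): # threshold-exceedance likelihoods
    """P(flood | wet AMC, rainfall above threshold), assuming the two
    observations are conditionally independent given flood / no flood."""
    like_flood = p_amc_given_flood * p_rain_given_flood * p_flood
    like_no = p_amc_given_no * p_rain_given_no * (1.0 - p_flood)
    return like_flood / (like_flood + like_no)

# Example: a rare-event prior, but both indicators point towards flooding.
print(posterior_flood(0.05, 0.8, 0.3, 0.9, 0.2))  # ~0.39, well above the 0.05 prior
```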

Keywords: flash flood, Bayesian, flash flood guidance, FFG, forecasting, Posina

Procedia PDF Downloads 136
549 Artificial Intelligence for Generative Modelling

Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta

Abstract:

As technology advances towards abundant computational resources, there is a paradigm shift in using these resources to optimize the design process. This paper discusses generative design using artificial intelligence to build better models that apply operations like selection, mutation, and crossover to generate results. The human mind tends toward the simplest approach while designing an object, whereas the intelligence learns from past designs and produces complex, optimized CAD models. Generative design takes the boundary conditions and iterates toward multiple solutions, arriving at a sturdy design with the most optimal parameters and saving large amounts of time and resources. The new production techniques at our disposal, such as additive manufacturing, 3D printing, and other innovative manufacturing techniques, allow us to save resources and produce artistically engineered CAD models. This paper also discusses the genetic algorithm and the non-domination technique for choosing the right results, using biomimicry of designs that have evolved in nature over millions of years. The computer uses parametric models to generate new models iteratively and uses cloud computing to store these iterative designs. The later part of the paper compares generative design with the topology optimization technology previously used to generate CAD models. Finally, the paper examines the performance of these algorithms and how they help in designing resource-efficient models.
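
As an illustration of the selection, mutation, and crossover operations mentioned above, here is a minimal genetic algorithm in Python (a generic toy, not the authors' generative design tool) that minimizes a stand-in "mass" objective under a hypothetical stiffness constraint encoded as a penalty.

```python
# Minimal genetic algorithm sketch: selection, crossover, mutation.
# The "design" is a vector of parameters; the fitness is a toy objective
# with a penalty term standing in for real boundary conditions.
import random

def fitness(design):
    mass = sum(design)                       # toy objective: minimize mass
    stiffness = sum(x * x for x in design)   # toy constraint proxy
    penalty = max(0.0, 10.0 - stiffness) * 100.0
    return -(mass + penalty)                 # higher is better

def select(pop, k=3):                        # tournament selection
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):                         # single-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(design, rate=0.2):
    return [x + random.gauss(0, 0.1) if random.random() < rate else x
            for x in design]

pop = [[random.uniform(0, 5) for _ in range(6)] for _ in range(50)]
for generation in range(100):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(50)]
best = max(pop, key=fitness)
print(best, fitness(best))
```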

Keywords: genetic algorithm, bio mimicry, generative modeling, non-dominant techniques

Procedia PDF Downloads 149
548 A Highly Efficient Broadcast Algorithm for Computer Networks

Authors: Ganesh Nandakumaran, Mehmet Karaata

Abstract:

A wave is a distributed execution, often made up of a broadcast phase followed by a feedback phase, requiring the participation of all the system processes before a particular event called the decision is taken. Wave algorithms with one initiator, such as the 1-wave algorithm, have been shown to be very efficient for broadcasting messages in tree networks. Extensions of this algorithm broadcasting a sequence of waves using a single initiator have been implemented in algorithms such as the m-wave algorithm. However, as the network size increases, having a single initiator adversely affects the message delivery times to nodes farther away from the initiator. As a remedy, broadcast waves can be initiated by multiple initiator nodes distributed across the network to reduce the completion time of broadcasts. These waves, initiated by one or more initiator processes, form a collection of waves covering the entire network. Global snapshots, distributed broadcast, and various synchronization problems can be solved efficiently using waves with multiple concurrent initiators. In this paper, we propose the first stabilizing multi-wave sequence algorithm implementing waves started by multiple initiator processes such that every process in the network receives at least one sequence of broadcasts. Being stabilizing, the proposed algorithm can withstand transient faults and does not require initialization. We view a fault as transient if it perturbs the configuration of the system but not its program.
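
To make the broadcast-and-feedback structure of a wave concrete, the sketch below simulates a single PFC-style wave on a tree: the initiator broadcasts down the tree, leaves start the feedback phase, and each node reports to its parent once all of its children have reported, after which the initiator takes the decision. This is a plain illustration of the one-initiator case only, not the stabilizing multi-initiator algorithm proposed in the paper.

```python
# Sketch of a single broadcast/feedback wave (PFC style) on a tree.
# The paper's contribution, a stabilizing algorithm with multiple
# concurrent initiators, is not reproduced here.
tree = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}  # node -> children

def wave(node):
    """Broadcast phase going down, feedback phase coming back up.
    Returns the set of processes that participated in node's subtree."""
    print(f"broadcast reaches node {node}")
    participated = {node}
    for child in tree[node]:
        participated |= wave(child)     # child's subtree completes its wave
    print(f"feedback from node {node}")
    return participated

if wave(0) == set(tree):                # all processes participated
    print("decision taken at the initiator")
```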

Keywords: distributed computing, multi-node broadcast, propagation of information with feedback and cleaning (PFC), stabilization, wave algorithms

Procedia PDF Downloads 504
547 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections

Authors: Anthony D. Rhodes, Manan Goel

Abstract:

We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistently high fidelity efficiently in our model chiefly through two means: (1) we use a statistically-principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains 27,046 high resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves on state-of-the-art segmentation fidelity with high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
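
The tensor decomposition above is used to modulate the number of hypercolumn features; as a rough stand-in, the numpy sketch below compresses an (H, W, C) feature stack by unfolding it to an (H·W, C) matrix and keeping the top-k singular directions. The actual decomposition and ranks used in HyperSeg are not specified here.

```python
# Hedged sketch: channel compression of hypercolumn features with a
# truncated SVD on the unfolded (H*W, C) feature matrix. The specific
# decomposition and rank used in HyperSeg may differ.
import numpy as np

def compress_hypercolumns(features, k):
    """features: (H, W, C) stack of hypercolumn features.
    Returns an (H, W, k) compressed stack."""
    H, W, C = features.shape
    X = features.reshape(H * W, C)
    # Economy SVD of the unfolding; rows of Vt[:k] span the k most
    # energetic channel directions.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:k].T).reshape(H, W, k)

feats = np.random.rand(64, 64, 960).astype(np.float32)  # e.g. stacked hypercolumns
print(compress_hypercolumns(feats, 64).shape)            # (64, 64, 64)
```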

Keywords: computer vision, object segmentation, interactive segmentation, model compression

Procedia PDF Downloads 120
546 Approaches to Ethical Hacking: A Conceptual Framework for Research

Authors: Lauren Provost

Abstract:

The digital world remains increasingly vulnerable, making the development of effective cybersecurity approaches even more critical to the success of the digital economy and to national security. Although approaches to cybersecurity have shifted and improved in the last decade with new models, especially with cloud computing and mobility, a record number of high-severity vulnerabilities was recorded in the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) in 2020. This is due, in part, to the increasing complexity of cyber ecosystems. Security must be approached with a more comprehensive, multi-tool strategy that addresses the complexity of cyber ecosystems, including the human factor. Ethical hacking has emerged as such an approach: a more effective, multi-strategy, comprehensive approach to cybersecurity's most pressing needs, especially understanding the human factor. Research on ethical hacking, however, is limited in scope. The two main objectives of this work are to (1) provide highlights of case studies in ethical hacking and (2) provide a conceptual framework for research in ethical hacking that embraces and addresses both technical and non-technical security measures. Recommendations include an improved conceptual framework for research centered on ethical hacking that addresses the many factors and attributes of significant attacks threatening computer security, and a more robust, integrative, multi-layered framework embracing the complexity of cybersecurity ecosystems.

Keywords: ethical hacking, literature review, penetration testing, social engineering

Procedia PDF Downloads 218
545 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continuous existence of human life, as humans depend directly on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize output. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest analysis and through computer vision technology, using image processing to determine the soil nutrient composition and the right amount, right time, and right place for applying farm input resources like fertilizers, herbicides, and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in image processing and data processing, which had been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetation need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, the highly effective deep learning approach of convolutional neural networks for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. Experimental results on the developed model yielded an average accuracy of 99.58%.
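
As a schematic of the kind of convolutional classifier described above (the paper's exact architecture, input size, and soil-nutrient classes are not reproduced here, so the shapes below are placeholders), a minimal Keras model might look like this:

```python
# Minimal sketch of a CNN classifier for soil images, assuming
# hypothetical 128x128 RGB inputs and 4 soil-nutrient classes; the
# paper's actual architecture and data pipeline are not shown.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(4, activation='softmax'),  # nutrient classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(val_images, val_labels))
```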

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 316
544 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov

Abstract:

The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters. For example, most attempts to implement the classical conjugate gradient method could at best keep the solution time constant as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for standard CG. Both the standard and pipelined CG methods need the vector entries generated by the current GPU and by other GPUs for the matrix-vector products, so communication between GPUs becomes a major performance bottleneck on multi-GPU clusters. The article presents an approach to minimize the communication between the parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU, the solver for large linear systems scales well. The algorithm is implemented with the combined use of MPI, OpenMP, and CUDA. We show that an almost optimal speedup on 8 CPUs/2 GPUs may be reached (relative to single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, compared to one GPU.
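
For reference, the sketch below implements the unpreconditioned pipelined CG recurrences (in the style of Ghysels and Vanroose) in plain numpy; both inner products per iteration involve the same vector r and can therefore be fused into one global reduction, the single synchronization point mentioned above. The paper's MPI/OpenMP/CUDA implementation distributes these operations across CPUs and GPUs; that distribution is not shown here.

```python
# Sketch of unpreconditioned pipelined CG. Both inner products use r, so
# on a cluster they fuse into a single global reduction per iteration;
# here numpy simply computes them serially.
import numpy as np

def pipelined_cg(A, b, x0, tol=1e-10, maxit=200):
    x = x0.copy()
    r = b - A @ x
    w = A @ r
    z = s = p = None
    gamma_old = alpha = None
    for i in range(maxit):
        gamma = r @ r                  # -- the two reductions that fuse
        delta = w @ r                  # -- into one synchronization point
        n = A @ w                      # matvec can overlap the reduction
        if i == 0:
            beta, alpha = 0.0, gamma / delta
            z, s, p = n, w, r
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
            z = n + beta * z
            s = w + beta * s
            p = r + beta * p
        x = x + alpha * p
        r = r - alpha * s              # recurrences replace extra matvecs
        w = w - alpha * z
        gamma_old = gamma
        if np.sqrt(gamma) < tol:
            break
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)        # SPD test matrix
b = rng.standard_normal(100)
x = pipelined_cg(A, b, np.zeros(100))
print(np.linalg.norm(A @ x - b))       # small residual
```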

Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm

Procedia PDF Downloads 165
543 Application of the Critical Decision Method for Monitoring and Improving Safety in the Construction Industry

Authors: Juan Carlos Rubio Romero, Francisco Salguero Caparros, Virginia Herrera-Pérez

Abstract:

No one is in the slightest doubt about the high levels of risk involved in work in the construction industry; they are even higher in structural construction work. The Critical Decision Method (CDM) is a semi-structured interview technique that uses cognitive probes to identify the different disturbances that workers have to deal with in their work activity. At present, the established vision of safety is being confronted by a new paradigm, known as Resilience Engineering, which focuses on daily performance and on the things that go well for safety and health management. The aim of this study has been to describe the variability of formwork labour on concrete structures in the construction industry and, from there, to characterize the resilient attitude of workers to unexpected events they have experienced during their working lives. For this purpose, a series of semi-structured interviews were carried out, applying the Critical Decision Method, with construction employees in Spain with extensive experience in formwork labour. This work is the first application of the Critical Decision Method in the field of construction and, more specifically, in the execution of structures. The results obtained show that situations categorised as unthought-of are identified to a greater extent than potentially unexpected situations. The identification in these interviews of both expected and unexpected events provides insight into the critical decisions made and actions taken to improve resilience in daily practice in this construction work. From this study, it is clear that it is essential to gain more knowledge about the nature of human cognitive processes in work situations within complex socio-technical systems such as construction sites. This could lead to a more effective design of workplaces in the search for improved human performance.

Keywords: resilience engineering, construction industry, unthought-of situations, critical decision method

Procedia PDF Downloads 148
542 Intracranial Hypertension without CVST in APLA Syndrome: A Unique Association

Authors: Camelia Porey, Binaya Kumar Jaiswal

Abstract:

BACKGROUND: Antiphospholipid antibody (APLA) syndrome is an autoimmune disorder predisposing to thrombotic complications that affect the CNS either by arterial vaso-occlusion or by venous thrombosis. Cerebral venous sinus thrombosis (CVST) secondarily causes raised intracranial pressure (ICP). However, intracranial hypertension without evidence of CVST is a rare entity. Here we present two cases of elevated ICP in the absence of identifiable CVST. CASE SUMMARY: Case 1, a 28-year-old female, had a 2-month history of holocranial headache followed by bilateral painless vision loss progressing to no light perception over 20 days. CSF opening pressure was elevated. Fundoscopy showed bilateral grade 4 papilledema. MRI revealed a partially empty sella with bilateral optic nerve tortuosity. Idiopathic intracranial hypertension (IIH) was diagnosed. With acetazolamide, there was complete resolution of the clinical and radiological abnormalities. Five months later, she presented with acute-onset right-sided hemiparesis. MRI was suggestive of an acute left MCA infarct; the MR venogram was normal. The APLA profile was positive, with high titres of anticardiolipin and beta-2 glycoprotein antibodies, both IgG and IgM. Case 2, a 23-year-old female, presented with headache and diplopia of 2 months' duration. CSF pressure was elevated, and grade 3 papilledema was seen. MRI showed bilateral optic nerve hyperintensities with nerve head protrusion and a normal MRV. The APLA profile showed elevated beta-2 glycoprotein IgG and IgA. CONCLUSION: This is an important non-thrombotic complication of APLA syndrome and requires further large-scale study for insight into its pathogenesis, and early recognition to avoid future complications.

Keywords: APLA syndrome, idiopathic intracranial hypertension, MR venogram, papilledema

Procedia PDF Downloads 177
541 Study of Superconducting Patch Printed on Electric-Magnetic Substrates Materials

Authors: Fortaki Tarek, S. Bedra

Abstract:

In this paper, the effects of both uniaxial anisotropy in the substrate and a high-Tc superconducting patch on the resonant frequency, half-power bandwidth, and radiation patterns are investigated using an electric field integral equation and the spectral domain Green's function. The analysis is based on a full electromagnetic wave model incorporating London's equations and the Gorter-Casimir two-fluid model, extended to investigate the resonant and radiation characteristics of a high-Tc superconducting rectangular microstrip patch printed on electric-magnetic uniaxially anisotropic substrate materials. The stationary phase technique has been used for computing the radiated electric field. The obtained results demonstrate a considerable improvement in the half-power bandwidth of the rectangular microstrip patch when a superconducting patch is used instead of a perfectly conducting one. Further results show that a high-Tc superconducting rectangular microstrip patch on a uniaxial substrate with properly selected electric and magnetic anisotropy ratios is more advantageous than one on an isotropic substrate, exhibiting wider bandwidth and better radiation characteristics. This behavior agrees with that found experimentally for superconducting patches on isotropic substrates. The calculated results have been compared with measured ones available in the literature, and excellent agreement has been found.

Keywords: high Tc superconducting microstrip patch, electric-magnetic anisotropic substrate, Galerkin method, surface complex impedance with boundary conditions, radiation patterns

Procedia PDF Downloads 444
540 Turin, from Factory City to Talents Power Player: The Role of Private Philanthropy Agents of Innovation in the Revolution of Human Capital Market in the Contemporary Socio-Urban Scenario

Authors: Renato Roda

Abstract:

With the emergence of the so-called 'Knowledge Society', the implementation of policies to attract, grow and retain talent, in the academic context as well, has become critical, both from the perspective of teaching and research and as far as administration and institutional management are concerned. At the same time, contemporary philanthropic entities and organizations, which are evolving from traditional types of social support towards new styles of aid designed to go beyond mere monetary donations, face brand-new forms of complexity in supporting the specific dynamics of the global human capital market. In this sense, it becomes unavoidable for philanthropic foundations, while carrying out their daily charitable tasks, to resort to innovative ways of facilitating the acquisition and promotion of talent by academic and research institutions. To deepen this perspective, this paper features the case of Turin, the former 'factory city' of Italy's North West and the headquarters, and main reference territory, of Italy's largest and richest private formerly bank-based philanthropic foundation, the Fondazione Compagnia di San Paolo. While it was assessed and classified as 'medium' in the city Global Talent Competitiveness Index (GTCI) of 2020, Turin has nevertheless acquired over the past months the status of an impact laboratory for a whole series of innovation strategies in the competition for the acquisition of excellent human capital. The leading actors in this new city vision are the foundations, with their specifically adjusted financial engagement and their consistent role as a stimulus towards innovation for research and education institutions.

Keywords: human capital, post-Fordism, private foundation, war on talents

Procedia PDF Downloads 171
539 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning

Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho

Abstract:

Image super-resolution is a common computer vision problem with many important applications. Generative adversarial networks (GANs) have promoted remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR (mainly of the generators) lead to performance degradation and increased energy consumption, making it difficult to deploy them on resource-constrained devices. To relieve this problem, in this paper we introduce an optimized and highly efficient architecture for the SRGAN generator model, utilizing the model compression techniques of knowledge distillation and pruning, which together reduce the storage requirements of the model while improving its performance. Our method begins by distilling the knowledge from a large pre-trained model into a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks.
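
The magnitude-based pruning step can be illustrated compactly. The sketch below (plain numpy; the paper's losses, schedules, and fine-tuning between pruning rounds are not reproduced) iteratively zeroes the smallest-magnitude weights of a model until a target sparsity is approached.

```python
# Hedged sketch of iterative global magnitude pruning on a dict of
# weight arrays; the distillation losses and the fine-tuning between
# pruning rounds used in the paper are omitted.
import numpy as np

def prune_step(weights, fraction):
    """Zero the `fraction` smallest-magnitude weights globally."""
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    threshold = np.quantile(all_mags, fraction)
    return {name: np.where(np.abs(w) < threshold, 0.0, w)
            for name, w in weights.items()}

rng = np.random.default_rng(0)
weights = {'conv1': rng.standard_normal((64, 3, 3, 3)),
           'conv2': rng.standard_normal((64, 64, 3, 3))}

for round_ in range(3):                   # iterative pruning rounds
    weights = prune_step(weights, 0.10)   # (fine-tuning would happen here)
    nnz = sum((w != 0).sum() for w in weights.values())
    total = sum(w.size for w in weights.values())
    print(f"round {round_}: density {nnz / total:.2f}")
```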

Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning

Procedia PDF Downloads 96
538 Amblyopia and Eccentric Fixation

Authors: Kristine Kalnica-Dorosenko, Aiga Svede

Abstract:

Amblyopia, or 'lazy eye', is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate the etiological factors, to create a clear retinal image, and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia with eccentric fixation is usually associated with problems in therapy. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but eccentricity diagnosis is relatively rare. The conflict that has developed concerning the relationship between the visual disorder and the degree of eccentric fixation in amblyopia should be rethought, because it has an important bearing on the cause and treatment of amblyopia and on the role of eccentric fixation in this case. Visuoscopy is the most frequently used method for determining eccentric fixation. In traditional visuoscopy, a fixation target is projected onto the patient's retina, and the examiner asks the patient to look straight at the center of the target. An optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye. It requires patients to voluntarily fixate the target and assumes that the foveal reflex accurately demarcates the center of the foveal pit. Finally, having a very simple method to evaluate fixation makes it possible to evaluate treatment improvement indirectly, as eccentric fixation is always associated with reduced visual acuity. One may therefore expect that if eccentric fixation is found in an amblyopic eye with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or another amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, improvement in the fixation pattern with treatment is an indirect measure of improvement in visual acuity. Evaluation of eccentric fixation may be helpful in identifying amblyopia in children prior to the measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.

Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy

Procedia PDF Downloads 158
537 Design of Low-Emission Catalytically Stabilized Combustion Chamber Concept

Authors: Annapurna Basavaraju, Andreas Marn, Franz Heitmeir

Abstract:

The Advisory Council for Aeronautics Research in Europe (ACARE) calls, in its Vision 2020, for an overall reduction of NOx emissions by 80%. Moreover, small turbo engines have higher fuel-specific emissions compared to large engines due to their limited combustion chamber size. In order to fulfill these requirements, novel combustion concepts are essential. This motivates the present research on a catalytically stabilized combustion chamber using hydrogen in small jet engines, which is designed and investigated both numerically and experimentally during this project. Catalytic combustion concepts can also be adopted for low-calorific fuels and are therefore not constrained to hydrogen. However, hydrogen has a high heating value and the major advantage of producing only nitrogen oxides as pollutants during combustion, thus eliminating concern about other emissions such as carbon monoxide. In the present work, the combustion chamber is designed based on the 'rich catalytic, lean burn' concept. The experiments are conducted for the characteristic operating range of an existing engine. This engine has been tested successfully at the Institute of Thermal Turbomachinery and Machine Dynamics (ITTM), Technical University Graz. Since efficient combustion results from proper mixing of the fuel-air mixture, considerable significance is given to the selection of an appropriate mixer. This led to the design of three diverse mixer configurations, which are investigated experimentally and numerically. Subsequently, the best mixer will be installed in the main combustion chamber and used throughout the experiments. Furthermore, temperatures and pressures will be recorded at various locations inside the combustion chamber, and the exhaust emissions will be analyzed. The instrumented combustion chamber will be inspected at engine-relevant inlet conditions for nine different sets of catalysts at the Hot Flow Test Facility (HFTF) of the institute.

Keywords: catalytic combustion, gas turbine, hydrogen, mixer, NOx emissions

Procedia PDF Downloads 305
536 Automatic Near-Infrared Image Colorization Using Synthetic Images

Authors: Yoganathan Karthik, Guhanathan Poravi

Abstract:

Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.

Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data

Procedia PDF Downloads 43
535 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysis of the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come with a cost. Little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification to take manufacturing digital data from various sources to determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision making on which dataset should be processed at the ‘edge’ and what to send to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
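
A fast-and-frugal heuristic inspects one cue at a time and exits as soon as a cue is decisive. The sketch below shows the flavour of such a decision rule for edge-versus-cloud placement; the cue names, ordering, and thresholds are invented for illustration and are not the paper's validated framework.

```python
# Hedged sketch of a fast-and-frugal tree for edge/cloud placement of
# manufacturing datasets. Cues, ordering, and thresholds are
# illustrative assumptions only.
def placement(dataset):
    if dataset['latency_critical']:        # cue 1: decision needed now -> edge
        return 'edge'
    if dataset['size_mb'] > 500:           # cue 2: too costly to ship -> edge
        return 'edge'
    if dataset['needs_fleet_context']:     # cue 3: cross-machine analytics
        return 'cloud'
    return 'cloud'                         # default: archive and batch-process

spindle_vibration = {'latency_critical': True,
                     'size_mb': 1200,
                     'needs_fleet_context': False}
print(placement(spindle_vibration))        # 'edge'
```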

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 182
534 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but they concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are denoising signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decompositions. For the developed constructions, we build generator and check matrices that contain the scaling-function coefficients of the wavelet. Based on these linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field; in the second, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
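
The second construction, where the redundancy is the cube of the information part, can be illustrated over a small field. The toy below works over a prime field GF(p) for readability (the paper's codes are built with wavelet-based generator matrices, typically over binary extension fields) and estimates the error masking probability by brute force.

```python
# Toy sketch of a robust code with cubic redundancy, r = x^3 in GF(p).
# Illustration over a small prime field only; the paper's constructions
# use wavelet-based generator/check matrices.
p = 257  # small prime field for illustration

def encode(x):
    return (x, pow(x, 3, p))

def check(x, r):
    return pow(x, 3, p) == r

def masking_fraction(ex, er):
    """Fraction of codewords for which the additive error (ex, er) is
    masked, i.e. the corrupted word still passes the check. The cubic
    nonlinearity keeps this small for every nonzero error."""
    masked = sum(check((x + ex) % p, (pow(x, 3, p) + er) % p)
                 for x in range(p))
    return masked / p

print(masking_fraction(1, 1))              # only a few codewords mask this error
print(max(masking_fraction(ex, er)         # worst case over a few errors
          for ex in range(1, 6) for er in range(6)))
```

Because (x + ex)³ ≡ x³ + er is at most a quadratic equation in x, each nonzero error is masked by at most two information words, so the masking probability is bounded by 2/p for every error, which is the uniform protection property mentioned above.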

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 489
533 Conceptual Understanding for the Adoption of Energy Assessment Methods in the United Arab Emirates Built Environment

Authors: Amna I. Shibeika, Batoul Y. Hittini, Tasneem B. Abd Bakri

Abstract:

Regulation and the integration of public policy, the economy, the insurance industry, education, and construction stakeholders are the main contributors to achieving sustainable development. Building environmental assessment methods were introduced in the field to address issues such as global warming and the conservation of natural resources. In the UAE, the Estidama framework, with its associated Pearl Building Rating System (PBRS), was introduced in 2010 to address and spread sustainability practices within the country's fast-growing built environment. Based on a literature review of studies investigating the project characteristics that influence sustainability outcomes, this paper presents a conceptual framework for understanding the adoption of PBRS in UAE projects. The framework also draws on Diffusion of Innovations theory to address the questions of how the assessment method is chosen in the first place and what impact PBRS has on multi-disciplinary design and construction processes. The study highlights the mandatory adoption of PBRS for government buildings, as well as the embedding of Estidama principles within Abu Dhabi building codes, as key factors in raising awareness of sustainable practices. Moreover, several project-related elements are addressed to understand their relationship with the adoption process, including project team collaboration; communication and coordination; levels of commitment and engagement; and the involvement of key actors as sustainability champions. This conceptualization of the adoption of PBRS in UAE projects contributes to the growing literature on the adoption of energy assessment tools and speaks to the UAE vision of being at the forefront of innovative sustainable development by 2021.

Keywords: adoption, building assessment, design management, innovation, sustainability

Procedia PDF Downloads 147
532 The Time-Frequency Domain Reflection Method for Aircraft Cable Defects Localization

Authors: Reza Rezaeipour Honarmandzad

Abstract:

This paper introduces an aircraft cable fault detection and location method based on time-frequency domain reflectometry (TFDR), with the goal of recognizing intermittent faults reliably and of coping with the serial and after-connector faults that are hard to distinguish with time domain reflectometry. In this strategy, the correlation function of the reflected and reference signals in the time-frequency domain is used to detect and locate the cable fault, so the hit rate for detecting and locating intermittent faults can be improved considerably. In practice, the reflected signal is corrupted by noise and false alarms occur frequently, so a threshold de-noising technique based on wavelet decomposition is used to reduce the noise interference and lower the false alarm rate. The time-frequency cross-correlation function of the reference and reflected signals, based on the Wigner-Ville distribution, is then computed in order to locate the fault position. Finally, LabVIEW is used to implement the operation and control interface, whose primary function is to connect to and control MATLAB and LabSQL. Thanks to the strong computing capability and extensive function library of MATLAB, the signal processing is easily realized; in addition, LabVIEW makes the framework more dependable and easily upgraded.
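
The core locating step, cross-correlating the reference and reflected signals to estimate the round-trip delay, can be sketched in a few lines of Python. The chirp parameters, propagation velocity, and noise level below are illustrative assumptions; the paper additionally works in the time-frequency (Wigner-Ville) domain and applies wavelet de-noising before correlation, which this time-domain toy omits.

```python
# Simplified time-domain sketch of reflectometry fault location by
# cross-correlation. The paper's method correlates in the time-frequency
# (Wigner-Ville) domain and adds wavelet threshold de-noising first.
import numpy as np

fs = 1e9                                   # 1 GS/s sampling (assumed)
t = np.arange(0, 2e-6, 1 / fs)
ref = np.sin(2 * np.pi * (50e6 + 25e12 * t) * t) * (t < 0.5e-6)  # chirp probe

v = 2e8                                    # propagation speed in cable (assumed)
fault_dist = 12.0                          # metres (ground truth for the demo)
delay = int(round(2 * fault_dist / v * fs))
reflected = 0.3 * np.roll(ref, delay) + 0.05 * np.random.randn(t.size)

corr = np.correlate(reflected, ref, mode='full')
lag = corr.argmax() - (ref.size - 1)       # round-trip delay in samples
print(v * lag / fs / 2)                    # estimated fault distance, ~12.0 m
```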

Keywords: aircraft cable, fault location, TFDR, LabVIEW

Procedia PDF Downloads 476
531 Performance Analysis of High Temperature Heat Pump Cycle for Industrial Process

Authors: Seon Tae Kim, Robert Hegner, Goksel Ozuylasi, Panagiotis Stathopoulos, Eberhard Nicke

Abstract:

High-temperature heat pumps (HTHPs) that can supply heat at temperatures above 200°C can enhance the energy efficiency of industrial processes and reduce the CO₂ emissions connected with the heat supply of these processes. In the current work, the thermodynamic performance of three different vapor compression cycles, all using R-718 (water) as the working medium, has been evaluated using a commercial process simulation tool (EBSILON Professional). All considered cycles use two-stage vapor compression with intercooling between stages. The main aim of the study is to compare different intercooling strategies and study possible heat recovery scenarios within the intercooling process. This comparison has been carried out by computing the coefficient of performance (COP), the heat supply temperature level, and the respective mass flow rate of water for all cycle architectures. With increasing temperature difference between the heat source and heat sink, ∆T, the COP values decreased as expected, and the highest COP value was found for the cycle configurations in which both compressors have the same pressure ratio (PR). An investigation of the HTHP capacities with optimized PR, together with an exergy analysis, has also been carried out. The internal heat exchanger cycle with the inward direction of secondary flow (IHX-in) showed a higher temperature level and exergy efficiency than the other cycles. Moreover, the available operating range was estimated by considering mechanical limitations.
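
The expected COP trend with ∆T can already be seen from the ideal (Carnot) heating COP, T_sink/(T_sink − T_source), scaled by a second-law efficiency. The sketch below uses an assumed 50% second-law efficiency purely for illustration; the paper's values come from EBSILON process simulations of the actual R-718 cycles, not from this formula.

```python
# Illustration of why COP falls with the source-sink temperature lift:
# the ideal heating COP is T_sink / (T_sink - T_source) in kelvin, here
# scaled by an assumed 50% second-law efficiency (not a paper value).
T_sink = 200.0 + 273.15              # heat supply at 200 degC
eta_II = 0.5                         # assumed second-law efficiency

for T_source_C in (80, 100, 120, 140, 160):
    T_source = T_source_C + 273.15
    cop_ideal = T_sink / (T_sink - T_source)
    print(f"dT = {200 - T_source_C:3d} K  COP ~ {eta_II * cop_ideal:.2f}")
```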

Keywords: high temperature heat pump, industrial process, vapor compression cycle, R-718 (water), thermodynamic analysis

Procedia PDF Downloads 149
530 Hybrid Renewable Power Systems

Authors: Salman Al-Alyani

Abstract:

In line with the Kingdom's Vision 2030, the Saudi Green Initiative was announced, aimed at reducing carbon emissions by more than 4% of the global contribution. The initiative includes plans to generate 50% of the country's energy from renewables by 2030. The geographical location of Saudi Arabia makes it among the best countries in terms of solar irradiation, and it has good wind resources in many areas across the Kingdom. Saudi Arabia is a vast country with many remote locations where it is not economically feasible to connect the loads to the national grid. With the improvement of battery technology and the reduction in its cost, different renewable technologies (primarily wind and solar) can be integrated to meet the need for energy more effectively and economically. Saudi Arabia is known for high solar irradiation, with solar power generation extending up to six hours per day (a 25% capacity factor) in some locations. However, the net present value (NPV) turns negative in some locations due to distance and high installation costs. Wind generation in Saudi Arabia is a promising technology, and hybrid renewable generation will increase the net present value and shorten the payback time thanks to the additional energy generated by wind. The infrastructure of the power system can be leveraged to combine solar generation and wind generation feeding the inverter, controller, and load. Storage systems can be added to cover the hours when wind or solar energy is absent. A smart controller can help integrate the various renewable technologies, primarily wind and solar, to meet demand while considering load characteristics, and it could be scalable for grid or off-grid applications. The objective of this paper is to study the feasibility of introducing a hybrid renewable system in remote locations and the concept for the development of such a smart controller.
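
The NPV comparison driving the argument above can be sketched simply. All figures below (capacities, capacity factors, costs, tariff, discount rate) are invented placeholders used only to show the mechanics of how adding wind energy can raise NPV and shorten payback for a remote site.

```python
# Hedged sketch of the NPV argument; every number is a placeholder,
# not a study result.
def npv(capex, annual_mwh, tariff=100.0, opex_frac=0.02, rate=0.06, years=20):
    """Net present value of a plant: upfront capex against a discounted
    stream of (energy revenue - O&M) over its lifetime."""
    annual_cash = annual_mwh * tariff - capex * opex_frac
    return -capex + sum(annual_cash / (1 + rate) ** y
                        for y in range(1, years + 1))

solar_mwh = 10 * 0.25 * 8760          # 10 MW PV at a 25% capacity factor
wind_mwh = 5 * 0.35 * 8760            # 5 MW wind at an assumed 35% CF

print(npv(capex=12e6, annual_mwh=solar_mwh))              # solar only
print(npv(capex=18e6, annual_mwh=solar_mwh + wind_mwh))   # hybrid: higher NPV
```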

Keywords: battery storage systems, hybrid power generation, solar energy, wind energy

Procedia PDF Downloads 178
529 Quality Analysis of Vegetables Through Image Processing

Authors: Abdul Khalique Baloch, Ali Okatan

Abstract:

Quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify gaps in it, and propose an improved approach: we design the algorithm and develop software to measure quality from images, and we compare the accuracy of the results with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. We focus on sorting food and vegetables from images: after processing the images, the application sorts and grades the produce, and it can make fewer errors than human-based manual grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good quality fruits and vegetables. Many customers suffer from unhealthy food and vegetables from suppliers, and no proper quality measurement level is followed, for example by hotel management; the developed software measures the quality of fruits and vegetables from images, indicating whether they are fresh or rotten. The algorithms reviewed in this work include digital image processing, ResNet, VGG16, CNNs, and transfer learning for grading feature extraction.
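
A transfer-learning grader of the kind reviewed above (a VGG16 backbone with frozen weights and a new classification head) can be sketched in Keras as follows; the input size and class count are placeholders, and the paper's own dataset and training settings are not reproduced.

```python
# Sketch of a VGG16 transfer-learning grader (fresh vs. rotten), with
# placeholder input size and class count; not the paper's exact setup.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # keep ImageNet features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),  # fresh / rotten
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
# After training, the model can be converted for on-device use:
# tf.lite.TFLiteConverter.from_keras_model(model).convert()
```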

Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria

Procedia PDF Downloads 70
528 The Perception on 21st Century Skills of Nursing Instructors and Nursing Students at Boromarajonani College of Nursing, Chonburi

Authors: Kamolrat Turner, Somporn Rakkwamsuk, Ladda Leungratanamart

Abstract:

The aim of this descriptive study was to determine the perception of 21st century skills among nursing instructors and nursing students at Boromarajonani College of Nursing, Chonburi. A total of 38 nursing instructors and 75 second-year nursing students took part in the study. Data were collected with a 21st century skills questionnaire comprising 63 items, and descriptive statistics were used to describe the findings. The results show that the overall mean scores of nursing instructors' perception of 21st century skills were at a high level. The highest mean scores were recorded for computing and ICT literacy and for career and learning skills; the lowest mean scores were recorded for reading, writing, and mathematics. The overall mean scores of nursing students' perception of 21st century skills were also at a high level. The highest mean scores were recorded for computing and ICT literacy, in which the highest item mean score was competency with computer programs. The lowest mean scores were recorded for the reading, writing, and mathematics components, in which the highest item mean score was for reading Thai correctly and the lowest was for reading English and translating it correctly for others. The findings from this study show that the perceptions of nursing instructors were consistent with those of nursing students. Moreover, activities aiming to raise capacity in reading English and translating information for others should be taken into consideration.

Keywords: 21st century skills, perception, nursing instructor, nursing student

Procedia PDF Downloads 316
527 Characteristic Sentence Stems in Academic English Texts: Definition, Identification, and Extraction

Authors: Jingjie Li, Wenjie Hu

Abstract:

Phraseological units in academic English texts have been a central focus in recent corpus linguistic research. A wide variety of phraseological units have been explored, including collocations, chunks, lexical bundles, patterns, semantic sequences, etc. This paper describes a special category of clause-level phraseological units, namely, Characteristic Sentence Stems (CSSs), with a view to describing their defining criteria and extraction method. CSSs are contiguous lexico-grammatical sequences which contain a subject-predicate structure and which are frame expressions characteristic of academic writing. The extraction of CSSs consists of six steps: Part-of-speech tagging, n-gram segmentation, structure identification, significance of occurrence calculation, text range calculation, and overlapping sequence reduction. Significance of occurrence calculation is the crux of this study. It includes the computing of both the internal association and the boundary independence of a CSS and tests the occurring significance of the CSS from both inside and outside perspectives. A new normalization algorithm is also introduced into the calculation of LocalMaxs for reducing overlapping sequences. It is argued that many sentence stems are so recurrent in academic texts that the most typical of them have become the habitual ways of making meaning in academic writing. Therefore, studies of CSSs could have potential implications and reference value for academic discourse analysis, English for Academic Purposes (EAP) teaching and writing.
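
Step four, the significance-of-occurrence calculation, is the crux; as a rough analogue, the sketch below scores n-gram 'glue' with the Symmetrical Conditional Probability measure and applies a simplified LocalMaxs filter (an n-gram survives when its glue is at least that of its parts and above that of its observed extensions). The paper's actual internal-association and boundary-independence measures, and its new normalization of LocalMaxs, are not reproduced here.

```python
# Simplified sketch of glue scoring plus LocalMaxs selection of recurrent
# word sequences; the paper's exact measures and normalization differ.
from collections import Counter

tokens = ("it can be seen that the results are consistent "
          "it can be seen that the model fits "
          "we argue that the model fits").split()

counts = Counter()
for n in range(1, 5):
    counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
total = len(tokens)

def prob(g):
    return counts[g] / total

def glue(g):                     # Symmetrical Conditional Probability (SCP)
    if len(g) == 1:
        return prob(g)
    avg = sum(prob(g[:i]) * prob(g[i:]) for i in range(1, len(g))) / (len(g) - 1)
    return prob(g) ** 2 / avg

def local_max(g):                # simplified LocalMaxs criterion
    sub = max(glue(g[:-1]), glue(g[1:]))
    sup = [glue(s) for s in counts if len(s) == len(g) + 1
           and (s[:-1] == g or s[1:] == g)]
    return glue(g) >= sub and all(glue(g) > x for x in sup)

for g in sorted((g for g in counts if 2 < len(g) < 5 and local_max(g)),
                key=glue, reverse=True):
    print(' '.join(g), round(glue(g), 4))
```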

Keywords: characteristic sentence stem, extraction method, phraseological unit, the statistical measure

Procedia PDF Downloads 166
526 Cloud Enterprise Application Provider Selection Model for the Small and Medium Enterprise: A Pilot Study

Authors: Rowland R. Ogunrinde, Yusmadi Y. Jusoh, Noraini Che Pa, Wan Nurhayati W. Rahman, Azizol B. Abdullah

Abstract:

Enterprise Applications (EAs) help organizations achieve operational excellence and competitive advantage. Over time, most Small and Medium Enterprises (SMEs), known to be the major drivers of most thriving global economies, have used the costly on-premise versions of these applications, making it difficult for their businesses to thrive competitively in the same market environment as their large enterprise counterparts. The advent of cloud computing presents SMEs with an affordable offer and great opportunities, as such EAs can be cloud-hosted and rented on a pay-per-use basis that does not require huge initial capital. However, as there are numerous Cloud Service Providers (CSPs) offering EAs as Software-as-a-Service (SaaS), there is the challenge of choosing a suitable provider with a Quality of Service (QoS) that meets the organization's customized requirements. The proposed model takes care of that and goes a step further to select the most affordable among a shortlist of CSPs. In the earlier stage, before developing the instrument and conducting the pilot test, the researchers conducted structured interviews with three experts to validate the proposed model. The validity and reliability of the instrument were then tested with experts and typical respondents, and the data were analyzed with SPSS 22. The results confirmed the validity of the proposed model and the validity and reliability of the instrument.
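
While the pilot study validates the instrument rather than the algorithm, the core selection logic, scoring CSPs on weighted QoS criteria and then choosing the cheapest among the top performers, can be illustrated as below; the criteria, weights, and provider figures are all invented for the example.

```python
# Illustrative sketch of QoS-weighted CSP selection followed by a cost
# tie-break; all criteria, weights, and provider values are invented.
weights = {'availability': 0.4, 'support': 0.2,
           'security': 0.3, 'scalability': 0.1}

providers = {        # normalized QoS scores in [0, 1] and monthly cost
    'CSP-A': ({'availability': 0.99, 'support': 0.7,
               'security': 0.9, 'scalability': 0.8}, 420),
    'CSP-B': ({'availability': 0.97, 'support': 0.9,
               'security': 0.8, 'scalability': 0.9}, 310),
    'CSP-C': ({'availability': 0.90, 'support': 0.6,
               'security': 0.7, 'scalability': 0.7}, 150),
}

def score(qos):
    return sum(weights[c] * qos[c] for c in weights)

shortlist = [name for name, (qos, _) in providers.items() if score(qos) >= 0.85]
cheapest = min(shortlist, key=lambda name: providers[name][1])
print(shortlist, '->', cheapest)   # ['CSP-A', 'CSP-B'] -> CSP-B
```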

Keywords: cloud service provider, enterprise application, quality of service, selection criteria, small and medium enterprise

Procedia PDF Downloads 179
525 Impact of Different Fuel Inlet Diameters onto the NOx Emissions in a Hydrogen Combustor

Authors: Annapurna Basavaraju, Arianna Mastrodonato, Franz Heitmeir

Abstract:

The Advisory Council for Aeronautics Research in Europe (ACARE) calls, in its Vision 2020, for an overall reduction of NOx emissions by 80%. This encourages researchers to work on novel technologies, one of which is the use of alternative fuels. Among these fuels, hydrogen is of interest because its only significant pollutant is NOx. NOx formation in hydrogen combustion depends on various parameters such as air pressure, inlet air temperature, and the air-to-fuel jet momentum ratio. Accordingly, this research investigates the impact of the air-to-fuel jet momentum ratio on NOx formation in a hydrogen combustion chamber for aircraft engines. The air-to-fuel jet momentum ratio is defined as the momentum of the air jet relative to the momentum of the fuel jet. The experiments were performed in an existing combustion chamber that had previously been tested with methane. Premixing of the reactants was not considered due to the high reactivity of hydrogen and the high risk of flashback. In order to create a less rich reaction zone at the burner and to decrease the emissions, a forced internal recirculation flow was achieved by integrating a plate with a honeycomb-like structure suited to the geometry of the liner. The liner was provided with an external cooling system to avoid an increase in local temperatures and, in turn, in the rate of NOx formation. The injected air was preheated, aiming at so-called flameless combustion. The air-to-fuel jet momentum ratio was examined by changing the area of the fuel inlets while keeping their number constant in order to alter the fuel jet momentum, thus maintaining the homogeneity of the flow. Within this analysis, promising results for flameless combustion were achieved. For a constant number of fuel inlets, it was seen that reducing the fuel inlet diameter decreased the air-to-fuel jet momentum ratio, in turn lowering the NOx emissions.
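
The central quantity can be written as J = (ṁ_air·v_air)/(ṁ_fuel·v_fuel) with v = ṁ/(ρA), so at fixed mass flows the ratio scales with the fuel inlet area. The toy numbers below (mass flows, densities, geometry) are placeholders chosen only to show the trend that shrinking the fuel inlet diameter lowers J.

```python
# Toy illustration of the air-to-fuel jet momentum ratio
#   J = (m_air * v_air) / (m_fuel * v_fuel),  v = m_dot / (rho * A);
# all mass flows, densities, and geometries are placeholder values.
import math

m_air, rho_air, A_air = 0.20, 4.0, 2.0e-3    # kg/s, kg/m^3, m^2 (assumed)
m_fuel, rho_h2 = 0.002, 0.8                  # hydrogen at inlet conditions (assumed)

def momentum_ratio(d_fuel, n_inlets=8):
    A_fuel = n_inlets * math.pi * d_fuel**2 / 4
    v_air = m_air / (rho_air * A_air)
    v_fuel = m_fuel / (rho_h2 * A_fuel)      # smaller inlets -> faster fuel jet
    return (m_air * v_air) / (m_fuel * v_fuel)

for d_mm in (2.0, 1.5, 1.0):                 # shrinking fuel inlet diameter
    print(f"d = {d_mm} mm  J = {momentum_ratio(d_mm * 1e-3):.2f}")
```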

Keywords: combustion chamber, hydrogen, jet momentum, NOx emission

Procedia PDF Downloads 292
524 Theoretical Analysis of the Optical and Solid State Properties of Thin Film

Authors: E. I. Ugwu

Abstract:

A theoretical analysis of the optical and solid state properties of a ZnS thin film is presented, using a beam propagation technique in which a scalar wave is propagated through the thin film deposited on a substrate. The dielectric medium is assumed to be split into a homogeneous reference dielectric constant term and a perturbed dielectric term representing the deposited thin film. These two terms constitute an arbitrary complex dielectric function that describes the dielectric perturbation imposed by the medium on the system. This function is substituted into the scalar wave equation, for which the appropriate Green's function is defined and solved using a series technique. The values obtained from the Green's function are then used in the Dyson and Lippmann-Schwinger equations, in conjunction with the Born approximation method, to compute the propagated field for different input wavelengths; in the process, the influence of the dielectric constants and the mesh size of the thin film on the propagating field is depicted. The computed field is used in turn to generate the data from which the band gap and the solid state and optical properties of the thin film, such as reflectance and transmittance, are computed; the band gap obtained was found to be in close agreement with the experimental value.
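
The field computation described above follows the standard Lippmann-Schwinger/Born structure, ψ = ψ₀ + k² ∫ G(x,x') δε(x') ψ(x') dx', truncated at first order in δε. The 1D numpy sketch below (slab thickness, dielectric perturbation, and mesh are invented for illustration) shows the numerical pattern only; the paper's full Dyson-equation computation for ZnS and its derived band-gap data are not reproduced here.

```python
# 1D first Born approximation sketch for a scalar wave through a thin
# film perturbation:
#   psi = psi0 + k^2 * Integral G(x, x') deps(x') psi0(x') dx',
# with the 1D Helmholtz Green's function G = exp(i k |x - x'|) / (2 i k).
# Film thickness, dielectric perturbation, and mesh are illustrative.
import numpy as np

k = 2 * np.pi / 0.5e-6                    # vacuum wavenumber at 500 nm
x = np.linspace(-1e-6, 2e-6, 1200)        # observation / integration mesh
dx = x[1] - x[0]

deps = np.where((x > 0) & (x < 0.2e-6), 1.5, 0.0)  # film: 0-200 nm slab
psi0 = np.exp(1j * k * x)                 # incident plane wave

G = np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)
psi = psi0 + k**2 * dx * (G @ (deps * psi0))   # first Born correction

# Intensity past the film: interference of incident and scattered field,
# from which transmittance-type quantities can be extracted.
print(np.abs(psi[x > 0.5e-6]).mean() ** 2)
```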

Keywords: scalar wave, optical and solid state properties, thin film, dielectric medium, perturbation, Lippmann Schwinger equations, Green’s Function, propagation

Procedia PDF Downloads 438