Search results for: neural interface
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3198

648 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

Application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed as pattern recognition systems: they acquire biometric data from an individual, extract a feature set, compare the feature set against the set stored in the vault and give a result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and prevents possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, using a Convolutional Neural Network (CNN) for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. Extracted feature sets were fused at the feature level. The proposed method was tested, and its performance and accuracy were compared with the results of other authors.
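
As an illustration of the feature-level fusion step described above, the following minimal Python sketch z-score normalizes the feature vector extracted from each instance of the same modality and concatenates them into one fused template; the function name, vector sizes and normalization choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fuse_feature_level(instances):
    """Feature-level fusion: z-score normalize each instance's feature
    vector, then concatenate into a single fused template."""
    normalized = []
    for f in instances:
        f = np.asarray(f, dtype=float)
        normalized.append((f - f.mean()) / (f.std() + 1e-12))
    return np.concatenate(normalized)

# Three instances of the same finger vein modality, each a 128-D vector
rng = np.random.default_rng(0)
fused = fuse_feature_level([rng.normal(size=128) for _ in range(3)])
print(fused.shape)  # (384,) -- one fused biometric feature vector
```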

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method

Procedia PDF Downloads 153
647 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation

Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam

Abstract:

Fashion is a human expression that is constantly changing. One of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant; it subconsciously reveals a lot about one’s mindset and mood. Analyzing colours by extracting them from outfit images is a critical study for examining individual and consumer behaviour. Several research works have been carried out on extracting colours from images, but to the best of our knowledge, no studies extract colours from a specific apparel item and identify colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images. Second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India’s general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion’s evolving colour preferences.
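
The second stage, extracting dominant colours only from the segmented T-shirt pixels, can be sketched as below with k-means clustering; the mask is assumed to come from the U-Net, and the function name and parameter choices are illustrative rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colours(image, mask, k=5):
    """Cluster only the pixels inside the T-shirt segmentation mask and
    return the k cluster centres sorted by the share of pixels they cover.

    image: (H, W, 3) RGB array; mask: (H, W) boolean array from U-Net.
    """
    pixels = image[mask].reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]
    return km.cluster_centers_[order], counts[order] / counts.sum()
```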

Keywords: colour analysis in t-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model

Procedia PDF Downloads 114
646 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); its hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction of a watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform with a feature mapping, that is, a mapping between the hidden layer and the output layer of the SLFN, which is used here for watermark embedding and extraction in a cover image. ELM has widespread applications ranging from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on the watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
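
A minimal sketch of the regularized ELM at the core of the scheme is given below: a random, untuned hidden-layer mapping followed by a least-squares solve for the output weights. The tanh activation and ridge regularization are common choices assumed here; the DWT block decomposition and the embedding rule are omitted.

```python
import numpy as np

def train_elm(X, T, n_hidden=200, reg=1e-3, seed=0):
    """Regularized ELM: random feature mapping + ridge output solve."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (not tuned)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer feature map
    # Output weights: beta = (H'H + reg*I)^-1 H'T  (regularized least squares)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```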

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 315
645 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is infeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. This study uses information on around 40,000 vehicles, covering vehicle specifications and operational environmental conditions such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
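
The nested cross-validation comparison described above can be sketched as follows with scikit-learn, assuming synthetic stand-in data since the vehicle dataset is not public; hyper-parameters are tuned in the inner loop, while the outer loop estimates the prediction error used for the model comparison.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-ins for vehicle-specification/usage features (X)
# and measured fuel consumption (y).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=500)

models = {
    "LR": (make_pipeline(StandardScaler(), LinearRegression()),
           {"linearregression__fit_intercept": [True, False]}),
    "KNN": (make_pipeline(StandardScaler(), KNeighborsRegressor()),
            {"kneighborsregressor__n_neighbors": [3, 5, 10]}),
    "ANN": (make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(16,), (32, 16)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=0)
for name, (pipe, grid) in models.items():
    inner = GridSearchCV(pipe, grid, cv=3)           # tune inside
    scores = cross_val_score(inner, X, y, cv=outer,  # estimate error outside
                             scoring="neg_mean_absolute_error")
    print(name, -scores.mean())
```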

Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing

Procedia PDF Downloads 185
644 Monitoring the Change of Padma River Bank at Faridpur, Bangladesh Using a Remote Sensing Approach

Authors: Ilme Faridatul, Bo Wu

Abstract:

Bangladesh is often called a land of rivers. It contains about 700 rivers, among which the Padma River is one of the largest. River bank change and erosion have become common natural hazards in Bangladesh. The river banks are under intense pressure from natural processes such as erosion and accretion, as well as anthropogenic processes such as urban growth and pollution. The Padma River flows along ten districts of Bangladesh, among which Faridpur district is the most vulnerable to river bank erosion. The severity of the erosion is so high that each year thousands of people become homeless and lose their agricultural land. Yet no specific research has been conducted to identify the changing pattern of the river bank along this district. The outcome of this research may therefore serve as guidance for river bank monitoring and management programs. This research utilizes integrated techniques of remote sensing and geographic information systems to monitor the changes from 1995 to 2015 in Faridpur district. To discriminate the land-water interface, the Modified Normalized Difference Water Index (MNDWI) algorithm is applied, and an on-screen digitization approach is used over the MNDWI images of 1995, 2002 and 2015 for river bank line extraction. The extent of the changes in the river bank along Faridpur district is estimated by overlaying the digitized maps of all three years. The river bank lines are highlighted to infer erosion and accretion, and the changes are calculated. The result shows that the middle of the river is gaining land through sedimentation and that both banks are shifting, causing severe erosion that results in the loss of farmland and homesteads. Over the study period from 1995 to 2015, the river witnessed substantial erosion and accretion that played an active role in the changes of the river bank.
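
A minimal sketch of the MNDWI computation used to discriminate the land-water interface is given below (MNDWI = (Green − MIR)/(Green + MIR), with water pixels taking positive values); the band variable names are placeholders for the actual satellite imagery used.

```python
import numpy as np

def mndwi(green, mir):
    """Modified Normalized Difference Water Index (Xu, 2006):
    MNDWI = (Green - MIR) / (Green + MIR); values > 0 flag water."""
    green = green.astype(float)
    mir = mir.astype(float)
    return (green - mir) / (green + mir + 1e-12)

# Hypothetical Landsat TM bands: band 2 (green) and band 5 (mid-infrared)
# water_mask = mndwi(band2, band5) > 0
```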

Keywords: river bank, erosion and accretion, change monitoring, remote sensing

Procedia PDF Downloads 327
643 Synthesis of ZnFe₂O₄-AC/CeMOF for Improved Photodegradation of Textile Dyes under Visible Light: Optimization and Statistical Study

Authors: Esraa Mohamed El-Fawal

Abstract:

A facile solvothermal procedure was applied to fabricate zinc ferrite nanoparticles (ZnFe₂O₄ NPs). Activated carbon (AC) derived from peanut shells was synthesized in a microwave through the chemical activation method. The ZnFe₂O₄-AC composite was then mixed with a cerium-based metal-organic framework (CeMOF) by solid-state mixing to formulate the ZnFe₂O₄-AC/CeMOF composite. The synthesized photomaterials were characterized by scanning/transmission electron microscopy (SEM/TEM), photoluminescence (PL), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) and ultraviolet-visible/diffuse reflectance spectroscopy (UV-Vis/DRS). The prepared ZnFe₂O₄-AC/CeMOF photomaterial shows significantly boosted efficiency for photodegradation of methyl orange/methylene blue (MO/MB) compared with pristine ZnFe₂O₄ and the ZnFe₂O₄-AC composite under visible-light irradiation. The favorable ZnFe₂O₄-AC/CeMOF photocatalyst displays the highest photocatalytic degradation efficiency of MB/MO (91.5% and 88.6%, respectively) compared with the other as-prepared materials after 30 min of visible-light irradiation. The apparent reaction rate constants (k = 1.94 and 1.31 min⁻¹, respectively) are also calculated. The boosted photocatalytic proficiency is ascribed to the heterojunction at the interface of the prepared photomaterial, which assists the separation of the charge carriers. To reach the optimum conditions, statistical analysis using response surface methodology was applied. The effect of the independent parameters (A) pH, (B) irradiation time and (C) initial pollutant concentration on the response function (% photodegradation of the MB/MO dyes, as examples of azo dyes) was investigated using a central composite design. At the optimum conditions, the photodegradation efficiency of MB/MO is 99.8% and 97.8%, respectively. The ZnFe₂O₄-AC/CeMOF hybrid reveals good stability over four consecutive cycles.

Keywords: azo-dyes, photo-catalysis, zinc ferrite, response surface methodology

Procedia PDF Downloads 172
642 Multimodal Sentiment Analysis with a Web-Based Application

Authors: Shreyansh Singh, Afroz Ahmed

Abstract:

Sentiment analysis aims to automatically reveal the underlying attitude that we hold towards an entity. The aggregate of this sentiment over a population amounts to opinion polling and has numerous applications. Current text-based sentiment analysis depends on the construction of word embeddings and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is now widely used for customer satisfaction assessment and brand perception analysis. With the expansion of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through the affective traces it leaves, such as facial and vocal expressions, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches use Recurrent Neural Networks (RNNs) with LSTM cells to increase their performance. In this study, we characterize sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in various domains, including spoken reviews, images, video blogs, and human-machine and human-human interactions. Challenges and opportunities of this emerging field are also discussed, supporting our thesis that multimodal sentiment analysis holds significant untapped potential.
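
As a hedged sketch of the text branch such pipelines typically use, the PyTorch snippet below wires word embeddings into an LSTM whose final hidden state feeds a sentiment classifier; all dimensions and the three-class output are illustrative assumptions, and the audio/visual branches and fusion step are omitted.

```python
import torch
import torch.nn as nn

class TextSentimentLSTM(nn.Module):
    """Minimal text branch of a multimodal pipeline: embeddings -> LSTM ->
    sentiment logits. Audio/visual branches would be fused analogously."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=256, classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, tokens):            # tokens: (batch, seq_len) int ids
        _, (h_n, _) = self.lstm(self.embed(tokens))
        return self.head(h_n[-1])         # logits: negative/neutral/positive

logits = TextSentimentLSTM()(torch.randint(0, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 3])
```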

Keywords: sentiment analysis, RNN, LSTM, word embeddings

Procedia PDF Downloads 124
641 Characterization of Electrical Transport across Ultra-Thin SrTiO₃ and BaTiO₃ Barriers in Tunnel Junctions

Authors: Henry Navarro, Martin Sirena, Nestor Haberkorn

Abstract:

We report on the electrical transport, through current-voltage (I-V) curves, of GdBa₂Cu₃O₇-d/insulator/GdBa₂Cu₃O₇-d and Nb/insulator/GdBa₂Cu₃O₇-d tunnel junctions, analyzed using a conducting atomic force microscope (CAFM) at room temperature. The measurements were obtained on tunnel junctions with different areas (900 μm², 400 μm² and 100 μm²). Trilayers with GdBa₂Cu₃O₇-d (GBCO) as the bottom electrode, SrTiO₃ (STO) or BaTiO₃ (BTO) as the insulating barrier (thicknesses between 1.6 nm and 4 nm), and GBCO or Nb as the top electrode were grown by DC sputtering on (100) SrTiO₃ substrates. For STO and BTO barriers, asymmetric I-V curves at positive and negative polarization can be obtained using electrodes with different work functions. The main difference is that BTO is a ferroelectric material, while in STO ferroelectricity can be produced by stress or deformation at the interfaces. In addition, hysteretic I-V curves are obtained for BTO barriers, which can be ascribed to a combined effect of ferroelectric switching polarization reversal and oxygen vacancy migration. For GBCO/BTO/GBCO heterostructures, the I-V curves correspond to those expected for asymmetric interfaces, which indicates that disorder affects the properties at the bottom and top interfaces differently. Our results show the role of interface disorder in the electrical transport of conductor/insulator/conductor heterostructures, which is relevant for different applications ranging from resistive switching memories (at room temperature) to Josephson junctions (at low temperatures). The superconducting transition of the GBCO electrode was characterized by electrical transport using the four-probe configuration. GBCO films with a low density of topological defects and with Tc above liquid-N₂ temperature can be obtained for thicknesses of 16 nm; our results demonstrate that GBCO films with an average root-mean-square (RMS) roughness smaller than 1 nm and areas (up to 100 μm²) free of 3D topological defects can be obtained.

Keywords: thin film, sputtering, conductive atomic force microscopy, tunnel junctions

Procedia PDF Downloads 156
640 Fake News Detection for Korean News Using Machine Learning Techniques

Authors: Tae-Uk Yun, Pullip Chung, Kee-Young Kwahk, Hyunchul Ahn

Abstract:

Fake news is defined as news articles that are intentionally and verifiably false and could mislead readers. The spread of fake news may provoke anxiety, chaos, fear, or irrational decisions of the public. Thus, detecting fake news and preventing its spread has become a very important issue in our society. However, due to the huge amount of fake news produced every day, it is almost impossible for humans to identify it. In this context, researchers have tried over the past years to develop automated fake news detection using machine learning techniques. However, to the best of our knowledge, no prior studies have proposed an automated fake news detection method for Korean news. In this study, we aim to detect Korean fake news using text mining and machine learning techniques. Our proposed method consists of two steps. In the first step, the news content to be analyzed is converted into quantified values using various text mining techniques (topic modeling, TF-IDF, and so on). After that, in step 2, classifiers are trained using the values produced in step 1. As the classifiers, machine learning techniques such as logistic regression, backpropagation networks, support vector machines, and deep neural networks can be applied. To validate the effectiveness of the proposed method, we collected about 200 short Korean news articles from Seoul National University’s FactCheck, which provides detailed analysis reports from 20 media outlets and links to source documents for each case. Using this dataset, we will identify which text features are important as well as which classifiers are effective in detecting Korean fake news.
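
A minimal sketch of the two-step method (quantify the text, then train a classifier) is shown below with scikit-learn; the toy snippets, the character n-gram TF-IDF (a common way to sidestep Korean tokenization) and the choice of logistic regression are illustrative assumptions, not the study's exact feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for labelled Korean news snippets (1 = fake, 0 = real);
# the actual corpus comes from SNU FactCheck and is not reproduced here.
texts = ["뉴스 기사 예시 하나", "뉴스 기사 예시 둘", "가짜 뉴스 예시", "검증된 뉴스 예시"]
labels = [0, 0, 1, 0]

# Step 1: quantify the text; Step 2: train a classifier on the values.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["새로운 뉴스 기사"]))
```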

Keywords: fake news detection, Korean news, machine learning, text mining

Procedia PDF Downloads 277
639 Bending Curvature Analysis and Motion of Soft Robotic Fingers by Full 3D Printing with the MC-Cells Technique for Hand Rehabilitation

Authors: Chaiyawat Musikapan, Ratchatin Chancharoen, Saknan Bongsebandhu-Phubhakdi

Abstract:

In recent years, soft robotic fingers have been used to support patients who survived neurological diseases that result in muscular disorders and neural damage, such as stroke and Parkinson’s disease, as well as inflammatory conditions such as De Quervain syndrome and trigger finger. Hand function is essential for manipulating objects in activities of daily living (ADL). In this work, we propose a soft actuator model manufactured entirely by 3D printing, without a molding process and using a single material. Furthermore, we designed the model with a technique of multi cavitation cells (MC-Cells). We then demonstrated the bending curvature, fluidic pressure and force generated by the model for assisting finger flexion and hand grasping. The soft actuators were also characterized mathematically through the chord length and arc length. In addition, we used an adaptive push-button switch machine to measure force in our experiment. We then evaluated biomechanical efficiency through the range of motion (ROM) at the metacarpophalangeal (MCP), proximal interphalangeal (PIP) and distal interphalangeal (DIP) joints. Finally, the model exhibited the fluidic pressure, force and ROM required to assist finger flexion and hand grasping.
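
The chord/arc characterization mentioned above has a closed geometric form for a constant-curvature actuator: the chord satisfies c = (2s/θ)·sin(θ/2) for arc length s and bend angle θ, so θ (and the curvature κ = θ/s) can be recovered numerically. A hedged sketch with illustrative lengths, not the paper's measured data:

```python
import numpy as np
from scipy.optimize import brentq

def bending_curvature(arc_len, chord_len):
    """Recover bend angle and curvature of a constant-curvature actuator
    from arc length s and chord length c via c = (2s/theta)*sin(theta/2).
    Returns (theta in rad, curvature = theta / s)."""
    if chord_len >= arc_len:          # straight finger: no bending
        return 0.0, 0.0
    f = lambda theta: 2 * arc_len * np.sin(theta / 2) / theta - chord_len
    theta = brentq(f, 1e-9, 2 * np.pi - 1e-9)
    return theta, theta / arc_len

theta, kappa = bending_curvature(arc_len=0.10, chord_len=0.08)  # lengths in m
print(np.degrees(theta), kappa)
```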

Keywords: biomechanics efficiency, curvature bending, hand functional assistance, multi cavitation cells (MC-Cells), range of motion (ROM)

Procedia PDF Downloads 265
638 Refined Edge Detection Network

Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni

Abstract:

Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, as well as various backgrounds. Edge detection is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods like Sobel and Canny. However, images of complex scenes still represent a challenge for these methods. Also, the edges detected by existing approaches suffer from unrefined results, with the output images containing many erroneous edges. To overcome this, in this paper, a refined edge detection network (RED-Net) is proposed using the mechanism of residual learning. By maintaining the high resolution of edges during the training process, and conserving the resolution of the edge image throughout the network stages, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated on the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images.
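
A hedged sketch of one stage in the spirit of RED-Net's residual learning is shown below: the block's output is summed with the previous layer's output and passed through an affine batch normalization; channel counts and layer ordering are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class RefinedBlock(nn.Module):
    """Residual unit whose output is summed with the incoming features,
    followed by an affine batch normalization acting as a smoothing step
    for homogeneous regions (illustrative, not the exact RED-Net stage)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn = nn.BatchNorm2d(channels, affine=True)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.bn(self.relu(out + x))   # residual sum keeps resolution

y = RefinedBlock(32)(torch.randn(1, 32, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```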

Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone

Procedia PDF Downloads 105
637 Rights, Differences and Inclusion: The Role of Transdisciplinary Approach in the Education for Diversity

Authors: Ana Campina, Maria Manuela Magalhaes, Eusebio André Machado, Cristina Costa-Lobo

Abstract:

The inclusive school advocates respect for differences, equal opportunities and a quality education for all, including for students with special educational needs. In the pursuit of educational equity, guaranteeing equality in access and results, it becomes the responsibility of the school to recognize students' needs, adapting to the various styles and rhythms of learning and ensuring the adequacy of curricula, strategies and resources, both material and human. This paper presents a set of theoretical reflections at the disciplinary interface between legal and education sciences and school administration and management, with the aim of understanding the real characteristics of inclusion in balance with inclusion policies and the need for an education for Human Rights, especially for diversity. Considering today's social complexity alongside the important educational instruments and strategies, mostly embodied in policy, this paper aims to expose the existing contexts that run counter to the laws, policies and inclusion education needs. More than a single study, this research aims to develop a map of the reality and guidelines to implement action. The results point to the usefulness and pertinence of a school in which educational managers, teachers, parents and students are involved in the creation, implementation and monitoring of flexible curricula adapted to the educational needs of students, promoting collaborative work among teachers. We are then faced with a scenario that points to the need to reflect on the legislation and curricular management of inclusive classes and to operationalize the processes of elaborating curricular adaptations and differentiation in the classroom. The transdisciplinary approach is well suited to pedagogic and social education, using the Human Rights binomial – teaching and learning – supported by inclusion laws and responding to the realistic needs of building an effective, successful society.

Keywords: rights, transdisciplinary, inclusion policies, education for diversity

Procedia PDF Downloads 393
636 Simulation Modelling of the Transmission of Concentrated Solar Radiation through Optical Fibres to a Thermal Application

Authors: M. Rahou, A. J. Andrews, G. Rosengarten

Abstract:

One of the main challenges in high-temperature solar thermal applications is to transfer concentrated solar radiation to the load with minimum energy loss and maximum overall efficiency. The use of a solar concentrator in conjunction with bundled optical fibres has potential advantages in terms of transmission energy efficiency, technical feasibility and cost-effectiveness compared to a conventional heat transfer system employing heat exchangers and a heat transfer fluid. In this paper, a theoretical and computer simulation method is described to estimate the net solar radiation transmission from a solar concentrator into and through optical fibres to a thermal application at the end of the fibres, over distances of up to 100 m. A key input to the simulation is the angular distribution of radiation intensity at each point across the aperture plane of the optical fibre. This distribution depends on the optical properties of the solar concentrator, in this case a parabolic mirror with a small secondary mirror at a common focal point and a point-focus Fresnel lens giving a collimated beam that passes into the optical fibre bundle. Since solar radiation comprises a broad band of wavelengths with very limited spatial coherence over the full spectrum, only ray tracing is employed, modelling absorption within the fibre and reflections at the interface between core and cladding, assuming no interference between rays. The intensity of the radiation across the exit plane of the fibre is found by integrating across all directions and wavelengths. Results of applying the simulation model to a parabolic concentrator and point-focus Fresnel lens with a typical optical fibre bundle will be reported, to show how the energy transmission varies with the length of fibre.
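
As a hedged illustration of the ray-tracing idea, the sketch below estimates bulk-absorption transmission by averaging Beer-Lambert attenuation over meridional rays whose path length grows as 1/cos θ with the internal angle θ to the fibre axis; lossless total internal reflection, a uniform angular distribution and the numerical values are all simplifying assumptions.

```python
import numpy as np

def fibre_transmission(alpha, length, theta_max_deg, n_rays=100_000):
    """Monte Carlo estimate of bulk-absorption transmission: a meridional
    ray at angle theta to the axis travels length/cos(theta), so it is
    attenuated by exp(-alpha * length / cos(theta))."""
    rng = np.random.default_rng(0)
    theta = np.deg2rad(theta_max_deg) * rng.random(n_rays)
    return np.exp(-alpha * length / np.cos(theta)).mean()

# Illustrative values: attenuation coefficient in 1/m, 100 m fibre,
# NA-limited internal half-angle of 12 degrees
print(fibre_transmission(alpha=0.01, length=100.0, theta_max_deg=12.0))
```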

Keywords: concentrated radiation, fibre bundle, parabolic dish, Fresnel lens, transmission

Procedia PDF Downloads 570
635 Micro-Scale Digital Image Correlation-Driven Finite Element Simulations of Deformation and Damage Initiation in Advanced High Strength Steels

Authors: Asim Alsharif, Christophe Pinna, Hassan Ghadbeigi

Abstract:

The development of next-generation advanced high strength steels (AHSS) used in the automotive industry requires a better understanding of local deformation and damage development at the scale of their microstructures. This work is focused on dual-phase DP1000 steels and involves micro-mechanical tensile testing inside a scanning electron microscope (SEM) combined with digital image correlation (DIC) to quantify the heterogeneity of deformation in both ferrite and martensite and its evolution up to fracture. Natural features of the microstructure are used for the correlation carried out using Davis LaVision software. Strain localization is observed in both phases with tensile strain values up to 130% and 110% recorded in ferrite and martensite respectively just before final fracture. Damage initiation sites have been observed during deformation in martensite but could not be correlated to local strain values. A finite element (FE) model of the microstructure has then been developed using Abaqus to map stress distributions over representative areas of the microstructure by forcing the model to deform as in the experiment using DIC-measured displacement maps as boundary conditions. A MATLAB code has been developed to automatically mesh the microstructure from SEM images and to map displacement vectors from DIC onto the FE mesh. Results show a correlation of damage initiation at the interface between ferrite and martensite with local principal stress values of about 1700 MPa in the martensite phase. Damage in ferrite is now being investigated, and results are expected to bring new insight into damage development in DP steels.
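
Mapping the DIC displacement vectors onto the FE mesh, as the MATLAB code in the study does, can be sketched in Python as an interpolation from measurement points to node coordinates; the function below is an assumption-laden illustration, not the authors' code.

```python
import numpy as np
from scipy.interpolate import griddata

def dic_to_fe_boundary(dic_xy, dic_uv, node_xy):
    """Interpolate DIC-measured displacement vectors onto FE boundary
    nodes so they can be written out as boundary conditions.
    dic_xy: (N, 2) measurement points; dic_uv: (N, 2) displacements;
    node_xy: (M, 2) mesh node coordinates (all hypothetical inputs)."""
    u = griddata(dic_xy, dic_uv[:, 0], node_xy, method="linear")
    v = griddata(dic_xy, dic_uv[:, 1], node_xy, method="linear")
    # fall back to nearest-neighbour where nodes sit outside the DIC hull
    nan = np.isnan(u)
    u[nan] = griddata(dic_xy, dic_uv[:, 0], node_xy[nan], method="nearest")
    v[nan] = griddata(dic_xy, dic_uv[:, 1], node_xy[nan], method="nearest")
    return np.column_stack([u, v])
```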

Keywords: advanced high strength steels, digital image correlation, finite element modelling, micro-mechanical testing

Procedia PDF Downloads 149
634 Visualization Tool for EEG Signal Segmentation

Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh

Abstract:

This work concerns the development of a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the change in mental states using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy and cognition studies have been suggested in the literature for data classification. The proposed method, however, focuses mainly on better presentation of the signal, which makes it a useful tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency domain features are used for segmentation, exploiting the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation: one provides the time scale and the other assigns the segmentation rule. The segmented data is displayed second by second, successively, with different color codes. The segment length can be selected according to the objective. The proposed algorithm has been tested on an EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
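
The frequency-band-power feature extraction over sliding windows can be sketched as below using Welch's method; the band limits and the one-second window are illustrative choices, not necessarily those of the tool.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs, win_sec=1.0):
    """Slide a window over a single EEG channel and return the spectral
    power in each classical band for every window."""
    step = int(win_sec * fs)
    out = []
    for start in range(0, len(signal) - step + 1, step):
        f, pxx = welch(signal[start:start + step], fs=fs, nperseg=step)
        out.append({name: np.trapz(pxx[(f >= lo) & (f < hi)],
                                   f[(f >= lo) & (f < hi)])
                    for name, (lo, hi) in BANDS.items()})
    return out
```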

Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation

Procedia PDF Downloads 404
633 Liquid-Liquid Plug Flow Characteristics in Microchannel with T-Junction

Authors: Anna Yagodnitsyna, Alexander Kovalev, Artur Bilsky

Abstract:

The efficiency of certain technological processes in two-phase microfluidics, such as emulsion production, nanomaterial synthesis, nitration and extraction processes, depends on the two-phase flow regimes in microchannels. For practical applications in chemistry and biochemistry, it is very important to predict the expected flow pattern for a large variety of fluids and channel geometries. In the case of immiscible liquids, the plug flow is a typical and optimal regime for chemical reactions and needs to be predicted from empirical data or correlations. In this work, flow patterns of immiscible liquid-liquid flow in a rectangular microchannel with a T-junction are investigated. Three liquid-liquid flow systems are considered, viz. kerosene-water, paraffin oil-water and castor oil-paraffin oil. Different flow patterns such as parallel flow, slug flow, plug flow, dispersed (droplet) flow and rivulet flow are observed for different velocity ratios. A new flow pattern, parallel flow with a steady wavy interface (serpentine flow), has been found. It is shown that flow pattern maps based on Weber numbers for different liquid-liquid systems do not match well. The Weber number multiplied by the Ohnesorge number is proposed as a parameter to generalize the flow maps. Flow maps based on this parameter superpose well for all liquid-liquid systems of this work and of other experiments. Plug length and velocity are measured for the plug flow regime. When the dispersed liquid wets the channel walls, the plug length cannot be predicted by known empirical correlations. By means of the particle tracking velocimetry technique, instantaneous velocity fields in the plug flow regime were measured. The flow circulation inside the plug was calculated using the velocity data, which can be useful for mass flux prediction in chemical reactions.
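
The proposed generalizing parameter is straightforward to compute: We = ρu²d/σ and Oh = μ/√(ρσd), so the product We·Oh combines inertia, interfacial tension and viscosity. A small sketch with illustrative fluid properties (not the paper's measured values):

```python
import numpy as np

def weber(rho, u, d, sigma):
    """Weber number: inertia vs. interfacial tension."""
    return rho * u**2 * d / sigma

def ohnesorge(mu, rho, sigma, d):
    """Ohnesorge number: viscous forces vs. inertia and surface tension."""
    return mu / np.sqrt(rho * sigma * d)

# Illustrative kerosene-water-like values in a 200-micron channel:
# rho in kg/m^3, mu in Pa*s, sigma in N/m, d in m, u in m/s
rho, mu, sigma, d, u = 1000.0, 1.0e-3, 0.04, 200e-6, 0.05
print(weber(rho, u, d, sigma) * ohnesorge(mu, rho, sigma, d))
```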

Keywords: flow patterns, hydrodynamics, liquid-liquid flow, microchannel

Procedia PDF Downloads 398
632 Ant Lion Optimization in a Fuzzy System for Benchmark Control Problem

Authors: Leticia Cervantes, Edith Garcia, Oscar Castillo

Abstract:

Today there are several control problems where the main objective is to obtain the best possible control and decrease the error in the application. Many techniques can be used to address these problems, such as neural networks, PID control, fuzzy logic, optimization techniques and many more. In this case, fuzzy logic, in the form of a fuzzy system, together with an optimization technique is used to control the case study. Specifically, Ant Lion Optimization (ALO) is used to optimize a fuzzy system that controls the velocity of a simple treadmill. The main objective is to achieve control of the velocity using ALO. First, a simple fuzzy system with two inputs (error and error change) and one output (desired speed) was used to control the velocity of the treadmill; then, to decrease the error, ALO was applied to optimize the fuzzy system. With the optimized system, the simulation was performed, and the results show that with ALO the velocity control was better than with the conventional fuzzy system. This paper describes some basic concepts to help understand the idea of this work and the methodology of the investigation (control problem, fuzzy system design, optimization); the results are presented, and the optimization is applied to the fuzzy system. A comparison between the simple fuzzy system and the optimized fuzzy system is presented, showing that the optimization improved the control with good results. The major finding of the study is that ALO is a good alternative for improving control, because it helped to decrease the error in the control application, whatever control technique is being optimized. As a final statement, it is important to mention that the selected methodology was sound, because the control of the treadmill was improved using the optimization technique.
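
A minimal sketch of a two-input/one-output fuzzy controller of the kind described is given below, using triangular memberships and centroid defuzzification; the membership limits and rules are illustrative placeholders standing in for the parameters that ALO would tune, and the error-change input is left out of the rule base for brevity.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def treadmill_speed(error, d_error):
    """Tiny Mamdani-style controller sketch; d_error would enter
    additional rules in the full two-input system (omitted here)."""
    neg_e = tri(error, -2, -1, 0)
    zero_e = tri(error, -1, 0, 1)
    pos_e = tri(error, 0, 1, 2)
    speeds = np.linspace(0.0, 2.0, 201)           # candidate output speeds
    slow = tri(speeds, 0.0, 0.3, 0.8)
    med = tri(speeds, 0.5, 1.0, 1.5)
    fast = tri(speeds, 1.2, 1.7, 2.0)
    # rule firing strengths (min-implication), aggregated by max
    agg = np.maximum.reduce([np.minimum(neg_e, slow),
                             np.minimum(zero_e, med),
                             np.minimum(pos_e, fast)])
    return (speeds * agg).sum() / (agg.sum() + 1e-12)  # centroid defuzzification

print(treadmill_speed(error=0.4, d_error=0.0))
```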

Keywords: ant lion optimization, control problem, fuzzy control, fuzzy system

Procedia PDF Downloads 403
631 Advancing Our Understanding of Age-Related Changes in Executive Functions: Insights from Neuroimaging, Genetics and Cognitive Neurosciences

Authors: Yasaman Mohammadi

Abstract:

Executive functions are a critical component of goal-directed behavior, encompassing a diverse set of cognitive processes such as working memory, cognitive flexibility, and inhibitory control. These functions are known to decline with age, but the precise mechanisms underlying this decline remain unclear. This paper provides an in-depth review of recent research investigating age-related changes in executive functions, drawing on insights from neuroimaging, genetics, and cognitive neuroscience. Through an interdisciplinary approach, this paper offers a nuanced understanding of the complex interplay between neural mechanisms, genetic factors, and cognitive processes that contribute to executive function decline in aging. Here, we investigate how different neuroimaging methods, like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), have helped scientists better understand the brain bases for age-related declines in executive function. Additionally, we discuss the role of genetic factors in mediating individual differences in executive functions across the lifespan, as well as the potential for cognitive interventions to mitigate age-related decline. Overall, this paper presents a comprehensive and integrative view of the current state of knowledge regarding age-related changes in executive functions. It underscores the need for continued interdisciplinary research to fully understand the complex and dynamic nature of executive function decline in aging, with the ultimate goal of developing effective interventions to promote healthy cognitive aging.

Keywords: executive functions, aging, neuroimaging, cognitive neuroscience, working memory, cognitive training

Procedia PDF Downloads 74
630 Stoa: Urban Community-Building Social Experiment through Mixed Reality Game Environment

Authors: Radek Richtr, Petr Pauš

Abstract:

Social media nowadays connects people more tightly and intensively than ever, but at the same time a certain social distance, incomprehension and loss of social integrity appears. People can be strongly connected to a person on the other side of the world but unaware of neighbours in the same district or street. Stoa is an application in the "serious games" genre: a research augmented-reality experiment masked as a gaming environment. In the Stoa environment, the player can plant and grow a virtual (organic) structure, a Pillar, that represents the whole suburb. Everybody has their own idea of what is an acceptable, admirable or harmful visual intervention in the area they live in; the purpose of this research experiment is to find and/or define the residents' shared subconscious spirit, the genius loci of the Pillar's vicinity, where the residents live. The appearance and evolution of Stoa's Pillars reflect the real world as perceived not only by the creator but also by other residents/players, who refine the environment with their actions. Squares, parks, patios and streets get their living avatar depictions; investors and urban planners obtain information on the occurrence and level of motivation for reshaping public space. As the project is in the product conceptual design phase, function is one of its most important factors. Function-based modelling makes the design problem modular and structured and thus decomposes it into sub-functions or function cells. The paper discusses the current conceptual model for the Stoa project, the use of different organic structure textures and models, the user interface design, the UX study and the project's development towards its final state.

Keywords: augmented reality, urban computing, interaction design, mixed reality, social engineering

Procedia PDF Downloads 232
629 3D Geological Modeling and Engineering Geological Characterization of Shallow Subsurface Soil and Rock of Addis Ababa, Ethiopia

Authors: Biruk Wolde, Atalay Ayele, Yonatan Garkabo, Trufat Hailmariam, Zemenu Germewu

Abstract:

A comprehensive three-dimensional (3D) geological model and engineering geological characterization of shallow subsurface soils and rocks are essential for a wide range of geotechnical and seismological engineering applications, particularly in urban environments. The spatial distribution and geological variation of the shallow subsurface of Addis Ababa city have not been studied so far in terms of geological and geotechnical modeling. This study aims at the construction of a 3D geological model and provides insight into the engineering geological characteristics of the shallow subsurface soil and rock of Addis Ababa city. The 3D geological model was constructed using more than 1500 geotechnical boreholes, well-drilling data, and geological maps. The well-known geostatistical kriging 3D interpolation algorithm was applied to visualize the spatial distribution and geological variation of the shallow subsurface. Due to the complex nature of the geological formations and the vertical and lateral variation of the geological profiles, the horizons-solid command was selected via the Groundwater Modelling System (GMS) graphical user interface software. For the engineering geological characterization of typical soils and rocks, both index and engineering laboratory tests were used. The geotechnical properties of soil and rocks vary from place to place due to the uneven nature of the subsurface formations observed in the study areas. The constructed model ascertains the thickness, extent, and 3D distribution of the important geological units of the city. This study is the first comprehensive research work on 3D geological modeling and subsurface characterization of soils and rocks in Addis Ababa city, and the outcomes will be important for further future research on subsurface conditions in the city. Furthermore, these findings provide a reference for developing a geo-database for the city.

Keywords: 3D geological modeling, Addis Ababa, engineering geology, geostatistics, horizons-solid

Procedia PDF Downloads 107
628 Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection

Authors: S. Shankar Bharathi

Abstract:

Inspection of surface defects on metallic components has always been challenging due to their specular property. Defects such as scratches, rust and pitting commonly occur on metallic surfaces during the manufacturing process. These defects, if unchecked, can hamper the performance and reduce the lifetime of such components. Many of the conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation and textural segmentation. They later employ other suitable algorithms based on morphology, region growing, shape analysis or neural networks for classification purposes. In this paper, the work is focused only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved inaccurate in extracting the defects alone. However, this paper does not focus on a comparison of different segmentation techniques; rather, it describes a novel approach to segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of the intensity levels, that is, whether a certain gray level is concentrated or evenly distributed. The algorithm is based on the extraction of such concentrated pixels. Defective images showed a higher concentration of certain gray levels, whereas in non-defective images there seemed to be no concentration, the gray levels being evenly distributed. This formed the basis for detecting the defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
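
The Hausdorff dilation distance has a direct morphological reading: dilate one set repeatedly with a unit structuring element and count the dilations needed until it covers the other set. A hedged sketch of the directed distance is given below (the symmetric version takes the maximum of both directions); function names are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def hausdorff_dilation_distance(a, b, max_iter=500):
    """Directed Hausdorff dilation distance: the number of unit dilations
    of binary image `a` needed until it covers binary image `b`."""
    a, b = a.astype(bool), b.astype(bool)
    if not b.any():
        return 0
    grown = a.copy()
    for n in range(max_iter + 1):
        if not (b & ~grown).any():      # every pixel of b is covered
            return n
        grown = binary_dilation(grown)  # 3x3 cross structuring element
    raise ValueError("no convergence; is `a` empty?")
```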

Keywords: metallic surface, scratches, segmentation, Hausdorff dilation distance, machine vision

Procedia PDF Downloads 432
627 An Integrated Framework for Seismic Risk Mitigation Decision Making

Authors: Mojtaba Sadeghi, Farshid Baniassadi, Hamed Kashani

Abstract:

One of the challenging issues faced by seismic retrofitting consultants and employers is quick decision-making on the demolition or retrofitting of a structure, now or in the future. The existing models proposed by researchers have covered only one of the aspects of cost, execution method, and structural vulnerability. Given the effect of each factor on the final decision, it is crucial to devise a new comprehensive model capable of covering all the factors simultaneously. This study attempts to provide an integrated framework that can be utilized to select the most appropriate earthquake risk mitigation solution for buildings. This framework can overcome the limitations of current models by taking into account several factors such as cost, execution method, risk-taking and structural failure. In the newly proposed model, the database and essential information about retrofitting projects are developed based on historical data from retrofit projects. In the next phase, an analysis is conducted to assess the vulnerability of the building under study. Then, the artificial neural network technique is employed to calculate the cost of retrofitting. While calculating the current price of the structure, an economic analysis is conducted to compare demolition versus retrofitting costs. At the next stage, the optimal method is identified. Finally, the implementation of the framework was demonstrated by collecting data concerning 155 previous projects.

Keywords: decision making, demolition, construction management, seismic retrofit

Procedia PDF Downloads 241
626 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution

Authors: Pitigalage Chamath Chandira Peiris

Abstract:

A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed utilizing both traditional and deep learning methodologies, as well as a variety of other approaches. Deep learning-based super-resolution methods, in particular, have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have used Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive. However, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. In this study, a method for single image super-resolution was developed that utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets to those obtained by other techniques in the domain.
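
One ingredient the abstract names, depth-wise convolution inside the feed-forward network, can be sketched in PyTorch as below; the expansion factor and activation are illustrative assumptions rather than Enhancer's published configuration.

```python
import torch
import torch.nn as nn

class DepthwiseConvFFN(nn.Module):
    """Feed-forward network with a depth-wise 3x3 convolution between the
    two point-wise projections, adding local context to the FFN."""
    def __init__(self, dim, expansion=4):
        super().__init__()
        hidden = dim * expansion
        self.proj_in = nn.Conv2d(dim, hidden, kernel_size=1)
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3,
                                padding=1, groups=hidden)  # depth-wise
        self.act = nn.GELU()
        self.proj_out = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x):  # x: (B, C, H, W)
        return self.proj_out(self.act(self.dwconv(self.proj_in(x))))

y = DepthwiseConvFFN(64)(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```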

Keywords: single image super resolution, computer vision, vision transformers, image restoration

Procedia PDF Downloads 108
625 In- and Out-of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries in relation to the US dollar. Using a series of unrestricted self-exciting threshold autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated through mean absolute error measures computed on 253-month rolling-window forecasts, and the comparison is extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR) and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five countries out of the sixteen considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have quite poor relative forecasting performance, especially when compared to alternative nonlinear specifications. Finally, by analyzing the implied half-lives of the autoregressive coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with inner unit root behaviour in five of the sixteen countries analyzed.
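
A minimal sketch of how a two-regime SETAR(1) model can be estimated, by grid-searching the threshold that minimizes the pooled sum of squared residuals of regime-wise least squares, is given below; the trimming fraction and delay of one are common conventions, not necessarily the paper's settings. The implied half-life then follows from a regime's autoregressive coefficient ρ as ln(0.5)/ln(|ρ|).

```python
import numpy as np

def fit_setar(y, trim=0.15):
    """Two-regime SETAR(1) with delay 1: grid-search the threshold that
    minimizes the pooled SSR of regime-wise OLS fits."""
    z = y[:-1]                     # threshold variable: lagged level
    x, t = y[:-1], y[1:]           # AR(1) regressor and target
    lo, hi = int(trim * len(z)), int((1 - trim) * len(z))
    best_ssr, best_thr = np.inf, None
    for thr in np.sort(z)[lo:hi]:  # trimmed candidate thresholds
        ssr = 0.0
        for mask in (z <= thr, z > thr):
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta, *_ = np.linalg.lstsq(X, t[mask], rcond=None)
            ssr += np.sum((t[mask] - X @ beta) ** 2)
        if ssr < best_ssr:
            best_ssr, best_thr = ssr, thr
    return best_thr, best_ssr

rng = np.random.default_rng(0)
y = rng.normal(size=400).cumsum() * 0.01   # toy series, not CPI data
print(fit_setar(y))
```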

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 283
624 Modeling and Design of E-mode GaN High Electron Mobility Transistors

Authors: Samson Mil'shtein, Dhawal Asthana, Benjamin Sullivan

Abstract:

The wide energy gap of GaN is the major parameter justifying the design and fabrication of high-power electronic components made of this material. However, the existence of a piezoelectric sheet charge at the AlGaN/GaN interface complicates the control of carrier injection into the intrinsic channel of GaN HEMTs (High Electron Mobility Transistors). As a result, most of the transistors created as R&D prototypes and all of the designs used for mass production are D-mode devices, which introduces challenges in the design of integrated circuits. This research presents the design and modeling of an E-mode GaN HEMT with a very low turn-on voltage. The proposed device includes two critical elements allowing the transistor to achieve zero conductance across the channel when Vg = 0 V. This is accomplished through the inclusion of an extremely thin, 2.5 nm intrinsic Ga₀.₇₄Al₀.₂₆N spacer layer. The added spacer layer does not create piezoelectric strain but rather elastically follows the variations of the crystal structure of the adjacent GaN channel. The second important factor is the design of a gate metal with a high work function. The use of a gate metal with a work function greater than 5.3 eV (Ni in this research) positioned on top of n-type doped (Nd = 10¹⁷ cm⁻³) Ga₀.₇₄Al₀.₂₆N creates the necessary built-in potential, which controls the injection of electrons into the intrinsic channel as the gate voltage is increased. The 5 µm long transistor with a 0.18 µm long gate and a channel width of 30 µm operates at Vd = 10 V. At Vg = 1 V, the device reaches a maximum drain current of 0.6 mA, which indicates a high current density. The presented device is operational at frequencies greater than 10 GHz and exhibits a stable transconductance over the full range of operational gate voltages.

Keywords: compound semiconductors, device modeling, enhancement mode HEMT, gallium nitride

Procedia PDF Downloads 265
623 A Picture Is Worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever-increasing resolutions in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction-limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has recently been embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high-dynamic-range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
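
The extreme quantization has a simple maximum-likelihood inversion in the idealized one-photon-threshold case: a jot fires with probability 1 − exp(−λ), so the block-wise ML rate estimate is λ̂ = −ln(1 − k/n) for k fired jots out of n. A hedged sketch of this baseline is given below; the paper's actual reconstruction additionally uses a sparse synthesis prior.

```python
import numpy as np

def ml_intensity(jots, block=8):
    """Block-wise ML estimate of the Poisson rate lambda from one-bit
    pixels with unit threshold: P(fire) = 1 - exp(-lambda), hence
    lambda_hat = -ln(1 - k/n) for k fired jots out of n per block."""
    h = (jots.shape[0] // block) * block
    w = (jots.shape[1] // block) * block
    p = jots[:h, :w].reshape(h // block, block,
                             w // block, block).mean(axis=(1, 3))
    p = np.clip(p, 0.0, 1.0 - 1e-6)   # guard against saturated blocks
    return -np.log1p(-p)

jots = np.random.default_rng(0).poisson(0.5, size=(64, 64)) >= 1
print(ml_intensity(jots).mean())       # close to the true rate of 0.5
```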

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 205
622 Brain-Computer Interface System for Lower Extremity Rehabilitation of Chronic Stroke Patients

Authors: Marc Sebastián-Romagosa, Woosang Cho, Rupert Ortner, Christy Li, Christoph Guger

Abstract:

Neurorehabilitation based on Brain-Computer Interfaces (BCIs) shows important rehabilitation effects for patients after stroke. Previous studies have shown improvements for patients who are in the chronic stage and/or have severe hemiparesis, who are particularly challenging for conventional rehabilitation techniques. For this publication, seven stroke patients in the chronic phase with hemiparesis in the lower extremity were recruited. All of them participated in 25 BCI sessions, about 3 times a week. The BCI system was based on Motor Imagery (MI) of the paretic ankle dorsiflexion and healthy wrist dorsiflexion, with Functional Electrical Stimulation (FES) and avatar feedback. Assessments were conducted to evaluate the changes in motor function before, during and after the rehabilitation training. The primary measures used for the assessment were the 10-meter walking test (10MWT), range of motion (ROM) of the ankle dorsiflexion, and Timed Up and Go (TUG). Results show a significant increase in gait speed in the primary measure, 10MWT fast velocity, of 0.18 m/s (IQR = [0.12, 0.20], P = 0.016). The speed in the TUG also increased significantly, by 0.1 m/s (IQR = [0.09, 0.11], P = 0.031). The active ROM increased by 4.65° (IQR = [1.67, 7.4], P = 0.029) after the rehabilitation training. These functional improvements persisted at least one month after the end of the therapy. These outcomes show the feasibility of this BCI approach for chronic stroke patients and further support the growing consensus that these types of tools might develop into a new paradigm of rehabilitation for stroke patients. However, the results come from only seven chronic stroke patients, so the authors believe that this approach should be further validated in broader randomized controlled studies involving more patients. MI and FES-based non-invasive BCIs are showing improvements in the gait rehabilitation of patients in the chronic stage after stroke. This could have an impact on the rehabilitation techniques used for these patients, especially when they are severely impaired and their mobility is limited.

Keywords: neuroscience, brain-computer interfaces, rehabilitation, stroke

Procedia PDF Downloads 94
621 Text Localization in Fixed-Layout Documents Using Convolutional Networks in a Coarse-to-Fine Manner

Authors: Beier Zhu, Rui Zhang, Qi Song

Abstract:

Text contained within fixed-layout documents such as ID cards, invoices, cheques, and passports can be of great semantic value and so requires high localization accuracy. Recently, algorithms based on deep convolutional networks have achieved high performance on text detection tasks. However, for text localization in fixed-layout documents, such algorithms detect word bounding boxes individually, which ignores the layout information. This paper presents a novel architecture built on convolutional neural networks (CNNs). A global text localization network and a regional bounding-box regression network are introduced to tackle the problem in a coarse-to-fine manner. The text localization network simultaneously locates word bounding points, which takes the layout information into account. The bounding-box regression network inputs the features pooled from arbitrarily sized RoIs and refines the localizations. These two networks share their convolutional features and are trained jointly. A typical type of fixed-layout document, the ID card, is selected to evaluate the effectiveness of the proposed system. These networks are trained on data cropped from natural scene images and on synthetic data produced by a synthetic text generation engine. Experiments show that our approach locates word bounding boxes with high accuracy and achieves state-of-the-art performance.

Keywords: bounding box regression, convolutional networks, fixed-layout documents, text localization

Procedia PDF Downloads 200
620 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information on cultural heritage (CH). The basis of this tool is a 3D model generally obtained from a cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired level of development (LOD), level of information (LOI), and grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings, etc.) and architectural (e.g., cornices, moldings, and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, state of conservation, etc.) and pathologies are added within the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI, and GOG levels. In addition, the easy implementation of the method, as well as the use of only one BIM software package with its respective plugin for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.

Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit

Procedia PDF Downloads 148
619 Software Development to Empower Digital Libraries with Effortless Digital Cataloging and Access

Authors: Abdul Basit Kiani

Abstract:

The software for the digital library system is a cutting-edge solution designed to revolutionize the way libraries manage and provide access to their vast collections of digital content. This advanced software leverages the power of technology to offer a seamless and user-friendly experience for both library staff and patrons. By implementing this software, libraries can efficiently organize, store, and retrieve digital resources, including e-books, audiobooks, journals, articles, and multimedia content. Its intuitive interface allows library staff to effortlessly manage cataloging, metadata extraction, and content enrichment, ensuring accurate and comprehensive access to digital materials. For patrons, the software offers a personalized and immersive digital library experience. They can easily browse the digital catalog, search for specific items, and explore related content through intelligent recommendation algorithms. The software also facilitates seamless borrowing, lending, and preservation of digital items, enabling users to access their favorite resources anytime, anywhere, on multiple devices. With robust security features, the software ensures the protection of intellectual property rights and enforces access controls to safeguard sensitive content. Integration with external authentication systems and user management tools streamlines the library's administration processes, while advanced analytics provide valuable insights into patron behavior and content usage. Overall, this software for the digital library system empowers libraries to embrace the digital era, offering enhanced access, convenience, and discoverability of their vast collections. It paves the way for a more inclusive and engaging library experience, catering to the evolving needs of tech-savvy patrons.

Keywords: software development, empowering digital libraries, digital cataloging and access, management system

Procedia PDF Downloads 87