Search results for: automatic image colorization
1236 An Attentional Bi-Stream Sequence Learner (AttBiSeL) for Credit Card Fraud Detection
Authors: Amir Shahab Shahabi, Mohsen Hasirian
Abstract:
Modern societies, marked by expansive Internet connectivity and the rise of e-commerce, are now integrated with digital platforms at an unprecedented level. The efficiency, speed, and accessibility of e-commerce have garnered a substantial consumer base. Against this backdrop, electronic banking has proliferated rapidly within the realm of online activities. However, this growth has inadvertently created an environment conducive to illicit activities, notably electronic payment fraud, posing a formidable challenge to electronic banking. Electronic fraud detection plays a pivotal role in upholding the integrity of electronic commerce and business transactions, particularly in the context of credit cards, which underscores the imperative of comprehensive research in this field. To this end, our study introduces an Attentional Bi-Stream Sequence Learner (AttBiSeL) framework that leverages attention mechanisms and recurrent networks. By incorporating bidirectional recurrent layers, specifically bidirectional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, the proposed model extracts past and future transaction sequences while accounting for the temporal flow of information in both directions. Moreover, the integration of an attention mechanism accentuates specific transactions to varying degrees, as manifested in the output of the recurrent networks. The effectiveness of the proposed approach in automatic credit card fraud classification is evaluated on the European Cardholders' Fraud Dataset. Empirical results validate that the hybrid architecture presented in this study yields enhanced accuracy compared to previous studies.
Keywords: credit card fraud, deep learning, attention mechanism, recurrent neural networks
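The abstract gives no implementation details; the following is a minimal sketch, under assumed hyperparameters (sequence length, layer widths, and feature count are illustrative, not the authors' settings), of an attention layer over stacked bidirectional LSTM/GRU outputs of the kind described:

```python
# Minimal sketch (not the authors' implementation) of an attention layer
# over stacked bidirectional LSTM/GRU outputs for transaction sequences.
import tensorflow as tf
from tensorflow.keras import layers, Model

seq_len, n_features = 30, 10  # assumed: 30 transactions, 10 features each

inputs = layers.Input(shape=(seq_len, n_features))
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
x = layers.Bidirectional(layers.GRU(32, return_sequences=True))(x)

# Additive attention: score each time step, softmax over time, weighted sum.
scores = layers.Dense(1, activation="tanh")(x)            # (batch, seq, 1)
weights = layers.Softmax(axis=1)(scores)                  # attention weights
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

outputs = layers.Dense(1, activation="sigmoid")(context)  # fraud probability
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
```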
Procedia PDF Downloads 13
1235 Silver Nanoparticles Enhanced Visible and Near Infrared Emission of Er³⁺ Ions Doped Lithium Tungsten Tellurite Glasses
Authors: Sachin Mahajan, Ghizal Ansari
Abstract:
TeO₂-WO₃-Li₂O glass doped with erbium ions (1 mol%) and embedded with silver nanoparticles (Ag NPs) has been successfully prepared by the melt quenching technique and by increasing the heat-treatment duration. The amorphous nature of the glass was determined by the X-ray diffraction method, and the presence of silver nanoparticles was confirmed using Transmission Electron Microscopy analysis. The TEM image reveals that the Ag NPs are dispersed homogeneously with an average size of 18 nm. From the UV-Vis absorption spectra, surface plasmon resonance (SPR) peaks are detected at 550 and 578 nm. Under 980 nm excitation, enhancement of the red upconversion fluorescence and the near-infrared broadband emission around 1550 nm of Er³⁺ ions in the tellurite glasses containing Ag NPs has been observed. The observed enhancement of the Er³⁺ emission is mainly attributed to the local field effects of the Ag NPs, which cause an intensified electromagnetic field around the NPs. The mechanisms involved in the observed enhancement are discussed.
Keywords: erbium ions, silver nanoparticle, surface plasmon resonance, upconversion emission
Procedia PDF Downloads 590
1234 Assisting Dating of Greek Papyri Images with Deep Learning
Authors: Asimina Paparrigopoulou, John Pavlopoulos, Maria Konstantinidou
Abstract:
Dating papyri accurately is crucial not only for editing their texts but also for our understanding of palaeography and the history of writing, ancient scholarship, material culture, networks in antiquity, etc. Most ancient manuscripts offer little evidence regarding the time of their production, forcing papyrologists to date them on palaeographical grounds, a method often criticized for its subjectivity. By experimenting with data obtained from the Collaborative Database of Dateable Greek Bookhands and the PapPal online collections of objectively dated Greek papyri, this study shows that deep learning dating models, pre-trained on generic images, can achieve accurate chronological estimates for a test subset (67.97% accuracy for book hands and 55.25% for documents). To compare the estimates of these models with those of humans, experts were asked to complete a questionnaire with samples of literary and documentary hands that had to be sorted chronologically by century. The same samples were dated by the models in question. The results are presented and analysed.
Keywords: image classification, papyri images, dating
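As a hedged illustration of "pre-trained on generic images", the sketch below fine-tunes an ImageNet-pretrained backbone for century classification; the backbone choice, input size, and class count are assumptions, not details reported by the study:

```python
# Transfer-learning sketch: an ImageNet-pretrained CNN with a new
# classification head predicting the century of a papyrus image.
import tensorflow as tf
from tensorflow.keras import layers, Model

n_centuries = 10  # assumed number of century classes

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # train only the new head at first

head = layers.Dense(n_centuries, activation="softmax")(base.output)
model = Model(base.input, head)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_century_labels, validation_data=...)
```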
Procedia PDF Downloads 78
1233 Manufacturing the Authenticity of Dokkaebi’s Visual Representation in Tourist Marketing
Authors: Mikyung Bak
Abstract:
The dokkaebi, a beloved icon of Korean culture, is represented as an elf, goblin, monster, dwarf, or similar creature in different media, such as animated shows, comics, soap operas, and movies. It is often described as a mythical creature with one or more horns and long teeth, wearing tiger-skin pants or a grass skirt, and carrying a magic stick. Many Korean researchers agree on the similarity of the image of the Korean dokkaebi to that of the Japanese oni, a view that is regarded as negative from an anti-colonial or nationalistic standpoint. They cite such similarity between the two mythical creatures as evidence that Japanese colonialism persists in Korea. The debate on the originality of dokkaebi’s visual representation is an issue that must be addressed urgently. This research demonstrates through a diagram the plurality of interpretations of dokkaebi’s visual representations in what are considered ‘authentic’ images of dokkaebi in Korean art and culture. This diagram presents the opinions of four major groups in the debate, namely, scholars of Korean literature and folklore, art historians, authors, and artists. It also shows the creation of new dokkaebi visual representations in popular media, including those influenced by the debate. The diagram further proves that dokkaebi’s representations have varied, including the typical persons or invisible characters found in Korean literature, original Korean folk characters in traditional art, and even universal spirit characters. They are also visually represented by completely new creatures as well as oni-based mythical beings and the actual oni itself. The earlier dokkaebi representations were driven by the creation of a national ideology or national cultural paradigm and, thus, were more uniform and protected. In contrast, the more recent representations are influenced by the Korean industrial strategy of ‘cultural economics,’ which is concerned with the international rather than the domestic market. This recent Korean cultural strategy emphasizes diversity and commonality with the global culture rather than originality and locality. It employs traditional cultural resources to construct a global image. Consequently, dokkaebi’s recent representations have become more common and diverse, thereby incorporating even oni’s characteristics. This development has rendered the grounds of the debate irrelevant. The dokkaebi has recently been used for tourist marketing purposes, particularly in revitalizing interest in regions considered the cradle of various traditional dokkaebi tales. These campaign strategies include the Jeju-do Dokkaebi Park, Koksung Dokkaebi Land, and the Taebaek and Sokri-san Dokkaebi Festivals. In tourist marketing, most dokkaebi characters are nearly identical to the Japanese oni. However, the pursuit of dokkaebi’s authentic visual representation is less interesting and fruitful than the appreciation of the entire spectrum of dokkaebi images that have been created. Thus, scholars and stakeholders must not exclude the possibilities for a variety of potentials within the visual culture. The same sentiment applies to traditional art and craft. This study aims to contribute to a new visualization of the dokkaebi that embraces the possibilities of both folk craft and art, which continue to be uncovered by diverse and careful researchers in a still-developing field.
Keywords: Dokkaebi, post-colonial period, representation, tourist marketing
Procedia PDF Downloads 278
1232 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks
Authors: Andrew N. Saylor, James R. Peters
Abstract:
Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
Keywords: scoliosis, artificial neural networks, Cobb angle, medical imaging
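The best-performing configuration reported above is straightforward to reconstruct; the sketch below is written against current tf.keras rather than the TensorFlow 1.13.1 used in the study, and the epoch count, patience, and validation split are assumptions:

```python
# Sketch of the reported best configuration: three hidden layers of
# 100 ReLU neurons on a flattened 500 x 187 image, MSE loss, SGD with
# learning rate 0.01, batch size 10, and early stopping.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

model = Sequential([
    layers.Input(shape=(500 * 187,)),   # flattened, intensity-scaled image
    layers.Dense(100, activation="relu"),
    layers.Dense(100, activation="relu"),
    layers.Dense(100, activation="relu"),
    layers.Dense(1),                    # predicted Cobb angle in degrees
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])
early_stop = tf.keras.callbacks.EarlyStopping(patience=10,
                                              restore_best_weights=True)
# model.fit(x_train, y_train, batch_size=10, epochs=500,
#           validation_split=0.1, callbacks=[early_stop])
```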
Procedia PDF Downloads 129
1231 Face Tracking and Recognition Using Deep Learning Approach
Authors: Degale Desta, Cheng Jian
Abstract:
The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. Such a system is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system. The idea behind designing and creating a face recognition system using deep learning with Azure ML, Python, and OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given: the system achieves 98.46% accuracy using Fast-RCNN, with the performance of the algorithms evaluated under different training conditions.
Keywords: deep learning, face recognition, identification, Fast-RCNN
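Since the abstract names OpenCV, a minimal sketch of the detection phase follows; note that it uses a Haar cascade as a lightweight stand-in, whereas the study's reported accuracy comes from Fast-RCNN, and the file name and parameters are illustrative:

```python
# Detection phase only: locate faces, then crop them for the
# identification phase.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("person.jpg")               # assumed input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_crop = image[y:y + h, x:x + w]        # fed to the recogniser
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```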
Procedia PDF Downloads 140
1230 Development of an Instrument: The Contemporary Adolescent Well-Being Scale (CAWBS)
Authors: Camille Rault, Mark Bahr
Abstract:
The aim of the present study was to develop a contemporary instrument measuring adolescents' subjective well-being (SWB). The instrument development underwent a three-phase pilot study. Phase one (N = 31) used a qualitative approach to generate domains of SWB relevant to adolescents. During the second phase (N = 22), items targeting these domains were tested. Finally, the third phase (N = 22) assisted in the addition, deletion, and refinement of items according to the first two phases of the pilot. A total of 49 items were retained for the final version of the instrument. The Contemporary Adolescent Well-Being Scale (CAWBS) was administered to 1071 school children (599 girls) aged between 10 and 18 years old (M = 14.70; SD = 1.45) from Queensland, Australia. Results confirmed the hypothesized seven-factor construct and explained 45% of the variance. The questionnaire pertained to seven domains of adolescents' SWB, namely: overall life satisfaction; bullying; body image; social connectedness; activities; control appraisal; and negative feelings. Reliability was shown to be acceptable, with Cronbach's alpha ranging from .58 to .89. Future research should refine the CAWBS and investigate the psychometric properties of this instrument.
Keywords: adolescence, construct validity, instrument, subjective well-being
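The reliability figures quoted above come from Cronbach's alpha; for reference, a minimal computation sketch follows, where the input array is an assumed placeholder for one subscale's item responses:

```python
# Cronbach's alpha for one subscale: item_scores is an
# (n_respondents x n_items) array of item responses.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    k = item_scores.shape[1]                        # number of items
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```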
Procedia PDF Downloads 269
1229 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web-based morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which provide justification for updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. Therefore, it is necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, these systems are also based on statistical/stochastic models. Such stochastic models, for example HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using Recurrent Neural Networks (RNNs); given that morphological analysis coverage is very low in Dialectal Arabic, it is particularly important to investigate deeply how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that handling dialectal variability can help improve analysis.
Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
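The full BP-LSTM-CRF adds a CRF output layer for joint decoding; the sketch below shows only the recurrent tagging core, with a per-character sigmoid standing in for the CRF, and assumed vocabulary and length parameters:

```python
# Character-level BiLSTM segmentation tagger: one binary "boundary
# after this character" decision per position (CRF layer omitted).
import tensorflow as tf
from tensorflow.keras import layers, Sequential

vocab_size, max_len = 64, 40   # assumed character inventory / word length

model = Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 32, mask_zero=True),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```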
Procedia PDF Downloads 42
1228 Non-Invasive Imaging of Human Tissue Using NIR Light
Authors: Ashwani Kumar
Abstract:
The use of NIR light for imaging biological tissue and quantifying its optical properties is a good alternative to invasive methods. Optical tomography involves two steps. One is the forward problem, and the other is the reconstruction problem. The forward problem consists of finding the measurements of transmitted light through the tissue from source to detector, given the spatial distribution of absorption and scattering properties. The second step is the reconstruction problem. In X-ray tomography, there are standard methods for reconstruction, namely the filtered back-projection method and the algebraic reconstruction methods. However, these methods cannot be applied as such in optical tomography, due to the highly scattering nature of biological tissue. A hybrid algorithm for reconstruction has been implemented in this work, which takes into account the highly scattered paths taken by photons while back-projecting the forward data obtained during Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function.
Keywords: NIR light, tissue, blurring, Monte Carlo simulation
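For readers unfamiliar with the Monte Carlo forward model mentioned above, a toy sketch follows: photons take exponentially distributed steps, lose weight to absorption, and scatter isotropically. The optical coefficients, slab geometry, and isotropic phase function are simplifying assumptions, not values from the study:

```python
# Toy Monte Carlo photon transport through a homogeneous slab.
import numpy as np

rng = np.random.default_rng(0)
mu_a, mu_s = 0.1, 10.0      # absorption / scattering coeffs (1/mm), assumed
mu_t = mu_a + mu_s
slab_depth, n_photons = 10.0, 100_000

transmitted = 0.0
for _ in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])          # launched along +z
    weight = 1.0
    while weight > 1e-4:
        step = -np.log(1.0 - rng.random()) / mu_t  # sampled free path
        pos = pos + step * direction
        if pos[2] >= slab_depth:                   # escaped far side
            transmitted += weight
            break
        if pos[2] < 0:                             # back-scattered out
            break
        weight *= mu_s / mu_t                      # absorption loss
        cos_t = 2 * rng.random() - 1               # isotropic new direction
        phi = 2 * np.pi * rng.random()
        sin_t = np.sqrt(1 - cos_t**2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

print("transmittance ≈", transmitted / n_photons)
```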
Procedia PDF Downloads 493
1227 The Dose to Organs in Lumbar-Abdominal Computed Tomography Imaging Using TLD
Authors: M. Zehtabian, Z. Molaiemanesh, Z. Shafahi, M. Papie, M. Zahraie Moghaddam, M. Mehralizadeh, M. R. Vahidi, S. Sina
Abstract:
The introduction of CT scans has been a great improvement in the diagnosis of different diseases. However, this imaging modality can expose patients to cumulative radiation doses, which may increase the risk of some health problems, such as cancer. In this study, the dose delivered to different organs in lumbar-abdominal imaging was measured by putting TLD-100 and TLD-100H chips inside an Alderson Rando phantom. The lumbar-abdominal image of the phantom was obtained while the TLD chips were inside the holes of the phantom. According to the results obtained using TLD-100 chips, the average doses received by the liver, bladder, rectum, kidneys, and uterus were found to be 12.9 mSv, 8.9 mSv, 10.1 mSv, 11.0 mSv, 11.2 mSv, and 10.5 mSv respectively, while the measurements performed by TLD-100H show that the average doses were 12.4 mSv, 9.2 mSv, 9.5 mSv, 10.5 mSv, 10.7 mSv, and 9.9 mSv respectively. The results of this study indicate that the doses measured by the TLD-100H chips are in close agreement with those obtained by TLD-100.
Keywords: CT scan, dose, TLD-100, diagnosis
Procedia PDF Downloads 636
1226 Triadic Relationship of Icon Design for Semi-Literate Communities
Authors: Peng-Hui Maffee Wan, Klarissa Ting Ting Chang, Rax Suen Chun Lung
Abstract:
Icons, or pictorial and graphical objects, are commonly used in Human-Computer Interaction (HCI) fields as mediators to communicate information to users. Yet there have been few studies focusing on a majority of the world's population, semi-literate communities, in terms of the fundamental know-how for designing icons for such populations. In this study, two sets of icons belonging to different icon taxonomies, abstract and concrete, are designed for a mobile application for semi-literate agricultural communities. In this paper, we propose a triadic relationship of an icon, namely meaning, task, and mental image, which inherits the triadic relationship of a sign. User testing with the application and a post-pilot questionnaire are conducted as the experimental approach in two rural villages in India. Icons belonging to the concrete taxonomy perform better than abstract icons, provided that the design of the icon fulfills the underlying rules of the proposed triadic relationship.
Keywords: icon, GUI, mobile app, semi-literate
Procedia PDF Downloads 489
1225 Low Cost Technique for Measuring Luminance in Biological Systems
Abstract:
In this work, the relationship between the melanin content in a tissue and the subsequent absorption of light through that tissue was determined using a digital camera. This technique proved to be simple, cost-effective, efficient, and reliable. Tissue phantom samples were created using milk and soy sauce to simulate the optical properties of melanin content in human tissue. Increasing the concentration of soy sauce in the milk corresponded to an increase in the melanin content of an individual. Two methods were employed to measure the light transmitted through the sample. The first was direct measurement of the transmitted intensity using a conventional lux meter. The second method involved correctly calibrating an ordinary digital camera and using image analysis software to calculate the transmitted intensity through the phantom. The results from these methods were then graphically compared to the theoretical relationship between the intensity of transmitted light and the concentration of absorbers in the sample. Conclusions were then drawn about the effectiveness and efficiency of these low-cost methods.
Keywords: tissue phantoms, scattering coefficient, albedo, low-cost method
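The theoretical intensity-concentration relationship referred to above is presumably the Beer-Lambert law, the standard description of attenuation by an absorber, stated here for reference with molar attenuation coefficient ε, concentration c, and path length l:

```latex
% Beer-Lambert law: transmitted intensity I versus incident intensity I_0
I = I_0 \, 10^{-\varepsilon c l},
\qquad
A = \log_{10}\frac{I_0}{I} = \varepsilon c l
```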
Procedia PDF Downloads 271
1224 Legal Pluralism and Ideology: The Recognition of the Indigenous Justice Administration in Bolivia through the "Indigenismo" and "Decolonisation" Discourses
Authors: Adriana Pereira Arteaga
Abstract:
In many Latin American countries, the transition towards legal pluralism has developed as part of what is called Latin American constitutionalism over the last thirty years. The aim of this paper is to discuss how legal pluralism in its current form in Bolivia may produce exclusion and violence. Legal sources and discourse analysis, as an approach to examining written language in discourse documentation, will be used to develop this paper. With the constitution of 2009, Bolivia was symbolically "re-founded" as a multi-nation state. This shift goes hand in hand with the "indigenista" and "decolonisation" ideologies developing since the early 20th century. Discourses based on these ideologies reflect the rejection of the liberal and Western premises on which the Bolivian republic was originally built after independence. According to the "indigenista" movements, the liberal nation-state generates institutions corresponding to a homogeneous society. These liberal institutions not only ignore the Bolivian multi-nation reality but also maintain the social structures originating from colonial times, based on prejudices against the indigenous. The described statements were elaborated through an image, highlighted by the constitution's preamble, of the indigenous people humiliated by a cruel Western system. This narrative had a considerable impact on people's sensitivity and received great social support. Therefore, the proposal for changing the structures of the nation-state is charged with an emancipatory message of restoring even the pre-Columbian order, an order at times romantically described as the perfect order. Legally, this connotes a rejection of the positivistic national legal system based on individual rights, and the promotion of constitutional recognition of indigenous justice administration. The pluralistic constitution is supposed to promote tolerance and a peaceful coexistence among nations, so that the unity and integrity of the country can be maintained. In its current form, legal pluralism in Bolivia is justified by pre-existing rights contained, for example, in the International Labour Organization Convention 169, but it is built more on the described discursive constructions. Over time, these discursive constructions have created inconsistencies in putting indigenous justice administration into practice. First, legal pluralism has been developed more at the level of political discourse, so a real interaction between the national and the indigenous jurisdictions cannot be observed; there are no clear coordination and cooperation mechanisms. Second, since the recently reformed constitution is based on deeply felt experiences, little is said about the general legal principles on which a pluralistic administration of justice in Bolivia should be based. Third, basic rights, liberties, and constitutional guarantees are also affected by the antagonized image of the national justice administration. As a result, fundamental rights could be violated on a large scale, because many indigenous justice administration practices run counter to these constitutional rules. These problems are not merely Bolivian but may also be encountered in other countries of the region with similar backgrounds, like Ecuador.
Keywords: discourse, indigenous justice, legal pluralism, multi-nation
Procedia PDF Downloads 445
1223 A Prototype of an Information and Communication Technology Based Intervention Tool for Children with Dyslexia
Authors: Rajlakshmi Guha, Sajjad Ansari, Shazia Nasreen, Hirak Banerjee, Jiaul Paik
Abstract:
Dyslexia is a neurocognitive disorder affecting around fifteen percent of the Indian population. The symptoms include difficulty in reading letters, words, and sentences. This difficulty can occur at the phonemic or recognition level and may further affect lexical structures. Therapeutic intervention for dyslexic children after assessment is generally done by special educators and psychologists through one-on-one interaction. Considering the large number of children affected and the scarcity of experts, access to care is limited in India. Moreover, the unavailability of resources and of timely communication with caregivers adds to the problem of proper intervention. With the development of educational technology and its use in India, access to information and care has improved in this large and diverse country. In this context, this paper proposes an ICT-enabled home-based intervention program for dyslexic children which supports the child and provides an interactive interface between expert, parents, and students. The paper discusses the details of the database design and system layout of the program. It also highlights the development of the different technical aids required to build personalized Android applications for the Indian dyslexic population. These technical aids include speech database creation for children, an automatic speech recognition system, serious game development, and color-coded fonts. The paper also emphasizes the games developed to assist the dyslexic child in cognitive training, primarily for attention, working memory, and spatial reasoning. In addition, it describes the specific elements of the interactive intervention tool that make it effective for home-based intervention of dyslexia.
Keywords: Android applications, cognitive training, dyslexia, intervention
Procedia PDF Downloads 291
1222 Study on 3D FE Analysis on Normal and Osteoporosis Mouse Models Based on 3-Point Bending Tests
Authors: Tae-min Byun, Chang-soo Chon, Dong-hyun Seo, Han-sung Kim, Bum-mo Ahn, Hui-suk Yun, Cheolwoong Ko
Abstract:
In this study, a 3-point bending computational analysis of normal and osteoporosis mouse models was performed based on micro-CT image information of the femurs. The finite element analysis (FEA) found 1.68 N (normal group) and 1.39 N (osteoporosis group) for the average maximum force, and 4.32 N/mm (normal group) and 3.56 N/mm (osteoporosis group) for the average stiffness. Compared with the 3-point bending test results, the maximum force and the stiffness from the analysis differed by a factor of about 9.4 in the normal group and about 11.2 in the osteoporosis group. The difference between the analysis and the test was substantial, and this result points to the material properties applied in the computational analysis as the area needing improvement. For the next study, the material properties of the mouse femur will be supplemented through additional computational analysis and testing.
Keywords: 3-point bending test, mouse, osteoporosis, FEA
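As context for the stiffness values quoted above, the classical Euler-Bernoulli result for a simply supported beam loaded at midspan (a simplification relative to the FEA, which uses the full micro-CT geometry) relates force F, deflection δ, span L, modulus E, and second moment of area I:

```latex
% Midspan stiffness of a simply supported beam in 3-point bending
k = \frac{F}{\delta} = \frac{48\,E I}{L^{3}}
```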
Procedia PDF Downloads 351
1221 Homogenization of Cocoa Beans Fermentation to Upgrade Quality Using an Original Improved Fermenter
Authors: Aka S. Koffi, N’Goran Yao, Philippe Bastide, Denis Bruneau, Diby Kadjo
Abstract:
Cocoa beans (Theobroma cacao L.) are the main component of chocolate manufacturing, and the beans must first be correctly fermented. The traditional process for performing the first (lactic) fermentation often consists of confining the cacao beans using banana leaves or a fermentation basket, both of which lead to poor thermal insulation of the product and an inability to mix it. A box fermenter reduces this heat loss by using thick wood (e > 3 cm), but mixing to homogenize the product is still hard to perform. Automatic fermenters are not cost-effective for most producers. The heat (T > 45°C) and acidity produced during fermentation by the microbial activity of yeasts and bacteria enable the emergence of the potential flavor and taste of the future chocolate. In this study, a cylindro-rotative fermenter (FCR-V1) was built, with coconut fibers used in its structure to retain heat. An axis of rotation (360°) was integrated to facilitate the turning and homogenization of the beans in the fermenter. This axis makes it possible to put the fermenter in a vertical position during the anaerobic alcoholic phase of fermentation, and in a horizontal position during the acetic phase, to take advantage of the mid-height filling. For the circulation of air flow during turning in the acetic phase, two woven rattan grids were made, one for the top and one for the bottom of the fermenter. In order to reduce air flow during the acetic phase, two airtight covers are placed on each grid cover. The efficiency of turning by this kind of rotation, coupled with the homogenization of temperature produced by the horizontal position in the acetic phase, contributes to a good proportion of well-fermented beans (83.23%). In addition, the beans' pH values ranged between 4.5 and 5.5. These values are ideal for the enzymatic activity involved in the production of the aromatic compounds inside the beans. The regularity of mass loss throughout fermentation makes it possible to predict the drying surface corresponding to the amount being fermented.
Keywords: cocoa fermentation, fermenter, microbial activity, temperature, turning
Procedia PDF Downloads 261
1220 Analysis of Wall Deformation of the Arterial Plaque Models: Effects of Viscoelasticity
Authors: Eun Kyung Kim, Kyehan Rhee
Abstract:
The viscoelastic wall properties of arterial plaques change as the disease progresses, and estimation of wall viscoelasticity can provide a valuable assessment tool for plaque rupture prediction. The cross section of a stenotic coronary artery was modeled based on an IVUS image, and finite element analysis was performed to obtain the wall deformation under pulsatile pressure. The effects of the viscoelastic parameters of the plaque on luminal diameter variations were explored. The results showed that decreasing the viscous effect reduced the phase angle between the pressure and displacement waveforms, and the phase angle was dependent on the viscoelastic properties of the wall. Because the viscous effect of tissue components can be identified using the phase angle difference, wall deformation waveform analysis may be applied to predict changes in plaque wall composition and the progression of vascular wall disease.
Keywords: atherosclerotic plaque, diameter variation, finite element method, viscoelasticity
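The phase angle referred to above is the standard signature of viscoelastic loss; for example, in the simple Kelvin-Voigt model (one of several possible wall models, used here only for illustration, not the constitutive model of the study), a sinusoidal load at angular frequency ω produces a displacement lag δ given by:

```latex
% Loss tangent of a Kelvin-Voigt solid (elastic modulus E, viscosity eta)
\tan\delta = \frac{\omega\,\eta}{E}
```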
Procedia PDF Downloads 215
1219 Regression Model Evaluation on Depth Camera Data for Gaze Estimation
Authors: James Purnama, Riri Fitri Sari
Abstract:
We investigate the machine learning algorithm selection problem in the context of depth-image-based eye gaze estimation, with respect to its essential difficulty: reducing the number of required training samples and the duration of training. Statistics-based measures of prediction accuracy are increasingly used to assess and evaluate prediction or estimation in gaze estimation. This article uses Root Mean Squared Error (RMSE) and R-squared statistical analysis to assess machine learning methods on depth camera data for gaze estimation. Four machine learning methods were evaluated: Random Forest Regression, Regression Tree, Support Vector Machine (SVM), and Linear Regression. The experimental results show that Random Forest Regression has the lowest RMSE and the highest R-squared, which means that it is the best among the evaluated methods.
Keywords: gaze estimation, gaze tracking, eye tracking, Kinect, regression model, orange python
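The four-model comparison is easy to reproduce; the sketch below uses scikit-learn (a library assumption; the keywords suggest the Orange toolkit was used in the study) with random placeholder data standing in for the depth-camera features:

```python
# Compare the four regressors on RMSE and R^2, as in the evaluation above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

# X: depth-image features, y: gaze targets (shapes are assumptions)
X, y = np.random.rand(500, 20), np.random.rand(500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Random Forest": RandomForestRegressor(random_state=0),
    "Regression Tree": DecisionTreeRegressor(random_state=0),
    "SVM": SVR(),
    "Linear Regression": LinearRegression(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: RMSE={rmse:.3f}, R2={r2_score(y_te, pred):.3f}")
```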
Procedia PDF Downloads 538
1218 The Impact of HRM Practices and Brand Performance on Financial Institution Performance: An Empirical Study
Authors: M. Khasro Miah, Chowdhury Hossan Golam, Muhammed Siddique Hossain
Abstract:
Recently, financial institutions' brand image has been turning out to be rather weak due to the presence of strong local competitors, and this in turn is affecting their firm performance as well. In this study, four major HR practices, namely employee commitment, empowerment, loyalty, and engagement, are considered in order to measure their effects on the brand and financial performance of banking organizations. This study finds that the banking institutions of Bangladesh are more customer-oriented than internal-employee-oriented, which makes it quite likely that the internal HR practices will have little or no effect on the banks' brand performance. Employee commitment emerged as the most important predictor, followed by employee loyalty and empowerment. The employees are well empowered, engaged, and show loyalty towards the organization, but their activities are not well linked with the brand. Firms should concentrate on creating a congenial working atmosphere, and employees should feel like a part of the organization.
Keywords: HR in bank, employee commitment, empowerment, finance, loyalty and engagement
Procedia PDF Downloads 482
1217 Text Based Shuffling Algorithm on Graphics Processing Unit for Digital Watermarking
Authors: Zayar Phyo, Ei Chaw Htoon
Abstract:
In a New-LSB-based steganography method, the Fisher-Yates algorithm is used to permute an existing array randomly. However, that algorithm becomes slower and runs into memory overflow problems when processing images of large dimensions. Therefore, the text-based shuffling algorithm aims to select only the necessary pixels as hiding positions at specific locations of an image, according to the length of the input text. In this paper, an enhanced text-based shuffling algorithm is presented, accelerated on the GPU to achieve better performance. The proposed algorithm employs the OpenCL Aparapi framework, along with an XORShift kernel and a Pseudo-Random Number Generator (PRNG) kernel. The PRNG is applied to produce random numbers inside the OpenCL kernel. The experiment on the proposed algorithm is carried out on a GPU, where it achieves faster processing speed and better efficiency without the disruption of unnecessary operating system tasks.
Keywords: LSB based steganography, Fisher-Yates algorithm, text-based shuffling algorithm, OpenCL, XORShiftKernel
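For reference, here is a CPU sketch of the two building blocks named above, a xorshift PRNG driving a Fisher-Yates shuffle; the actual implementation runs these as OpenCL kernels via Aparapi, so this Python version only illustrates the logic, with an assumed seed value:

```python
# xorshift32 PRNG feeding an in-place Fisher-Yates shuffle.
def xorshift32(state: int) -> int:
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def fisher_yates(items, seed: int = 2463534242) -> list:
    state = seed
    items = list(items)
    for i in range(len(items) - 1, 0, -1):
        state = xorshift32(state)          # next pseudo-random number
        j = state % (i + 1)                # pick a swap partner in [0, i]
        items[i], items[j] = items[j], items[i]
    return items

pixel_positions = fisher_yates(range(16))  # candidate hiding positions
```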
Procedia PDF Downloads 150
1216 Automation of Savitsky's Method for Power Calculation of High Speed Vessel and Generating Empirical Formula
Authors: M. Towhidur Rahman, Nasim Zaman Piyas, M. Sadiqul Baree, Shahnewaz Ahmed
Abstract:
The design of high-speed craft has recently become one of the most active areas of naval architecture. Increased speed makes these vehicles more efficient and useful for military, economic, or leisure purposes. The planing hull is designed specifically to achieve relatively high speed on the surface of the water, and speed on the water surface is closely related to the size of the vessel and the installed power. The Savitsky method was first presented in 1964 and was later extended for application to non-monohedric hulls and to stepped hulls. This method is well known as a reliable alternative to CFD analysis of hull resistance. A computer program based on Savitsky's method has been developed using MATLAB, and the power of high-speed vessels has been computed in this research. At first, the program reads the principal parameters, such as displacement, LCG, speed, deadrise angle, and inclination of the thrust line with respect to the keel line, and calculates the resistance of the hull using Savitsky's empirical planing equations. However, some functions used in the empirical equations are available only in graphical form, which is not suitable for automatic computation. We use a digital plotting system to extract data from the nomograms. As a result, the value of the wetted length-beam ratio and the trim angle can be determined directly from the input of the initial variables, which automates the power calculation without manual plotting of secondary variables such as p/b and other coefficients; the regression equations for those functions are derived using data from the different charts. Finally, the trim angle, mean wetted length-beam ratio, frictional coefficient, resistance, and power are computed and compared with Savitsky's results, and good agreement has been observed.
Keywords: nomogram, planing hull, principal parameters, regression
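The chart-to-regression step described above is simple to illustrate; in the sketch below, the (x, y) pairs are placeholder values, not data from Savitsky's actual nomograms, and the quadratic degree is an assumption:

```python
# Derive a regression equation from points digitised off a nomogram
# curve, replacing the manual chart lookup.
import numpy as np

x = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])          # hypothetical
y = np.array([0.42, 0.55, 0.66, 0.74, 0.81, 0.86, 0.90])    # hypothetical

coeffs = np.polyfit(x, y, deg=2)   # fit a quadratic regression
curve = np.poly1d(coeffs)

print(curve(2.2))  # value now computed directly, no manual plotting
```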
Procedia PDF Downloads 404
1215 Fabrication of Cellulose Acetate/Polyethylene Glycol Membranes Blended with Silica and Carbon Nanotube for Desalination Process
Authors: Siti Nurkhamidah, Yeni Rahmawati, Fadlilatul Taufany, Eamor M. Woo, I Made P. A. Merta, Deffry D. A. Putra, Pitsyah Alifiyanti, Krisna D. Priambodo
Abstract:
Cellulose acetate/polyethylene glycol (CA/PEG) membranes were modified with varying amounts of silica and carbon nanotubes (CNTs) to enhance their separation performance in the desalination process. These composite membranes were characterized for their hydrophilicity, morphology, and permeation properties. The experimental results show that the hydrophilicity of CA/PEG/silica membranes increases with increasing silica concentration and with decreasing silica particle size. Scanning Electron Microscopy (SEM) images show that the pore structure of the CA/PEG membranes becomes more developed with the addition of silica. Membrane performance analysis shows that the permeate flux, salt rejection, and permeability of the membranes increase with increasing silica concentration. The effect of CNTs on the hydrophilicity, morphology, and permeation properties is also discussed.
Keywords: carbon nanotube, cellulose acetate, desalination, membrane, PEG
Procedia PDF Downloads 320
1214 Features Vector Selection for the Recognition of the Fragmented Handwritten Numeric Chains
Authors: Salim Ouchtati, Aissa Belmeguenai, Mouldi Bedda
Abstract:
In this study, we propose an offline system for the recognition of fragmented handwritten numeric chains. Firstly, we developed a recognition system for isolated handwritten digits; in this part, the study is based mainly on the evaluation of the performance of a neural network trained with the gradient backpropagation algorithm. The parameters used to form the input vector of the neural network are extracted from the binary images of the isolated handwritten digits by several methods: the distribution sequence, the application of probes, the Barr features, and the centered moments of the different projections and profiles. Secondly, the study is extended to the reading of fragmented handwritten numeric chains consisting of a variable number of digits. The vertical projection is used to segment the numeric chain into isolated digits, and every digit (or segment) is presented separately to the input of the system developed in the first part (the recognition system for isolated handwritten digits).
Keywords: features extraction, handwritten numeric chains, image processing, neural networks
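The vertical-projection segmentation step lends itself to a short sketch; assuming a binarized image with ink = 1, blank columns mark the cuts between digits:

```python
# Vertical-projection segmentation: column sums of the binary image
# locate blank gaps that separate individual digits.
import numpy as np

def segment_digits(binary_img: np.ndarray) -> list:
    """binary_img: 2D array, 1 = ink, 0 = background."""
    profile = binary_img.sum(axis=0)           # vertical projection
    ink_cols = profile > 0
    segments, start = [], None
    for col, has_ink in enumerate(ink_cols):
        if has_ink and start is None:
            start = col                        # a digit begins here
        elif not has_ink and start is not None:
            segments.append(binary_img[:, start:col])
            start = None
    if start is not None:                      # digit runs to the edge
        segments.append(binary_img[:, start:])
    return segments
```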
Procedia PDF Downloads 265
1213 Tree Species Classification Using Effective Features of Polarimetric SAR and Hyperspectral Images
Authors: Milad Vahidi, Mahmod R. Sahebi, Mehrnoosh Omati, Reza Mohammadi
Abstract:
Forest management organizations need information to perform their work effectively, and remote sensing is an effective method of acquiring information about the Earth. Two remote sensing image datasets were used to classify forested regions. Firstly, all extractable features from the hyperspectral and PolSAR images were extracted. The optical features were spectral indices related to chemical and water content, structural indices, effective bands, and absorption features; the PolSAR features were the original data, target decomposition components, and SAR discriminator features. Secondly, particle swarm optimization (PSO) and genetic algorithms (GA) were applied to select optimal features. Furthermore, a support vector machine (SVM) classifier was used to classify the image. The results showed that the combination of PSO and SVM had higher overall accuracy than the other cases, providing an overall accuracy of about 90.56%. The effective features were the spectral indices, the bands in the shortwave infrared (SWIR) and visible ranges, and certain PolSAR features.
Keywords: hyperspectral, PolSAR, feature selection, SVM
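The wrapper-style pairing of a feature selector with an SVM can be sketched as follows; note that a plain random subset search stands in here for the PSO/GA optimisers used in the study, and the data shapes are assumed placeholders:

```python
# Wrapper feature selection with an SVM: score candidate feature
# subsets by cross-validated accuracy and keep the best one.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = np.random.rand(300, 40), np.random.randint(0, 4, 300)  # placeholder

best_score, best_mask = 0.0, None
for _ in range(50):                       # candidate feature subsets
    mask = rng.random(X.shape[1]) < 0.5
    if not mask.any():
        continue
    score = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    if score > best_score:
        best_score, best_mask = score, mask

print("selected features:", np.flatnonzero(best_mask))
```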
Procedia PDF Downloads 416
1212 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy
Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather
Abstract:
Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast. A major challenge in applying optical microscopy to in vivo tissue imaging is the effect of light attenuation, which limits light penetration depth and achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light illuminates the sample from the side to excite fluorophores within the sample of interest. Images are formed based on the detection of fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition, the low photon dose to samples, and optical sectioning makes SPIM an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM rely on the use of fluorescence reporters, whether endogenous or exogenous. This approach has the disadvantage that, in the case of exogenous probes, the specimens are altered from their native state, rendering them unsuitable for in vivo studies; in general, fluorescence emission is also weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM that has downstream applications in the clinical setting. The experimental setup used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both the fluorescent emission and the light scattered from intrinsic sources of optical contrast in the sample being studied. This work first involved calibration of the imaging system and validation of the label-free method with well-characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel at varying cell concentrations were then imaged. A time-course study to track cell proliferation in the 3D construct was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection, and 3D maps were reconstructed. The results obtained validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and in native tissue. This technique has potential use in a near-patient environment, where it can provide results quickly and be implemented in an easy-to-use manner, offering more information with improved spatial resolution and depth penetration than current approaches.
Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging
Procedia PDF Downloads 247
1211 The Images of Japan and the Japanese People: A Case of Japanese as a Foreign Language Students in Portugal
Authors: Tomoko Yaginuma, Rosa Cabecinhas
Abstract:
Recently, studies of the images of Japan and/or the Japanese people have been conducted in the context of Japanese language education, since the number of students of Japanese as a Foreign Language (JFL) has been increasing worldwide, including in Portugal. It has been claimed that one of the reasons for this increase is the current popularity of Japanese pop culture, namely anime (Japanese animation) and manga (Japanese comics), among young students. In the present study, the images of Japan and the Japanese held by JFL students in Portugal were examined by a questionnaire survey. The JFL students in higher education in Portugal (N = 296) were asked, among other questions, to rate their degree of agreement (using a Likert scale) with 24 pre-defined descriptions of the Japanese, which emerged as relevant in a qualitative pilot study conducted beforehand. The results show that the image of the Japanese people held by Portuguese JFL students is organized around four dimensions: 1) diligence, 2) kindness, 3) conservativeness, and 4) innovativeness. The students considered anime to be the main source of information about the Japanese people and culture, and anime was also strongly associated with the students' interest in learning the Japanese language.
Keywords: anime, cultural studies, images about Japan and Japanese people, Portugal
Procedia PDF Downloads 150
1210 Application of the Seismic Reflection Survey to Active Fault Imaging
Authors: Nomin-Erdene Erdenetsogt, Tseedulam Khuut, Batsaikhan Tserenpil, Bayarsaikhan Enkhee
Abstract:
Within the framework of 60 years of development of astronomical and geophysical science in modern Mongolia, various geophysical methods (electrical tomography, ground-penetrating radar, and high-resolution reflection seismic profiles) were used to image an active fault at depths ranging from a few decimetres to a few tens of metres. The fault ruptured during a magnitude 7.6 earthquake in 1967. After the geophysical investigations, trench excavations were carried out at the sites to expose the fault surfaces. The complex geophysical survey of the Mogod fault, in the Bulgan region of central Mongolia, shows interpretable reflection arrivals in the depth range of <5 m to 50 m, with the potential for increased resolution. The reflection profiles were used to help interpret the significance of neotectonic surface deformation at the active fault. The interpreted profiles show a range of shallow fault structures and provide subsurface evidence supported by paleoseismological trenching photographs and electrical surveys.
Keywords: Mogod fault, geophysics, seismic processing, seismic reflection survey
Procedia PDF Downloads 127
1209 Multi-Scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover and of land productivity, as well as sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons, based on the numerical processing of PlanetScope PSB.SB sensor images of September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkhas, and barchans). It is important to note that each auxiliary dataset contributed to improving the segmentation at different scales. The silted areas were then classified using a nearest neighbour approach over the Naâma area using the imagery. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and the lowest mapping accuracy (kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
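Of the auxiliary layers listed above, the NDVI layer is the simplest to reproduce; a minimal sketch follows, where the band handling and the small stabilising epsilon are implementation assumptions:

```python
# Build an NDVI layer from red and near-infrared bands, as used as an
# auxiliary input to the FNEA segmentation.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero
```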
Procedia PDF Downloads 109
1208 Application of Advanced Remote Sensing Data in Mineral Exploration in the Vicinity of Heavy Dense Forest Cover Area of Jharkhand and Odisha State Mining Area
Authors: Hemant Kumar, R. N. K. Sharma, A. P. Krishna
Abstract:
The study was carried out in the Saranda region of Jharkhand and a part of Odisha state. Geospatial data from Hyperion, a hyperspectral remote sensing sensor, have been used. This study applied a wide variety of image processing techniques to enhance and extract the mining classes of Fe and Mn ores. Landsat-8 OLI sensor data have also been used to correctly explore the related minerals. In this way, various processes were applied to enhance the mineralogy classes, and a comparative evaluation was performed. The Hyperion dataset for hyperspectral remote sensing has been verified as an effective tool for extracting mineral and rock information within the shortwave infrared band range used. The abundant spatial and spectral information contained in hyperspectral images enables the differentiation of surface materials for targeted exploration applications such as mineral detection and mining.
Keywords: Hyperion, hyperspectral, sensor, Landsat-8
Procedia PDF Downloads 123
1207 The Impact of Legislation on Waste and Losses in the Food Processing Sector in the UK/EU
Authors: David Lloyd, David Owen, Martin Jardine
Abstract:
Introduction: European weight regulations for food products require a full understanding of the regulatory guidelines to assure compliance. It is suggested that the complexity of the regulation leads to practices that result in the overfilling of food packages by food processors. Purpose: To establish the current practices of food processors and the financial, sustainability, and societal impacts on the food supply chain of ineffective food production practices. Methods: An analysis of food packing controls with 10 companies across varying food categories, and quantitative research with a further 15 food processors on confidence in the weight control analysis of finished packs within their organisations. Results: A process floor analysis of manufacturing operations focusing on 10 products found overfill of packages ranging from 4.8% to 20.2%. Standard deviation figures for all products showed a potential for reducing the average weight of the pack while still retaining the legal status of the product. In 20% of cases, an automatic weight analysis machine was in situ; however, packs were still significantly overweight. Collateral impacts noted included the effect of overfill on raw material purchases and the added food miles, often on a global basis, with one raw material alone generating 10,000 extra food miles due to the poor weight control of the processing unit. A case study of a meat product and a bakery product will be discussed, showing the impact of poor controls resulting from complex legislation. The case studies will highlight extra energy costs in production and the impact of the extra weight on fuel usage. If successful, a risk assessment model used primarily for food safety, but adapted to identify waste/sustainability risks, will be discussed within the presentation.
Keywords: legislation, overfill, profile, waste
Procedia PDF Downloads 406