Search results for: image well theory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7181

6641 User Authentication Using Graphical Password with Sound Signature

Authors: Devi Srinivas, K. Sindhuja

Abstract:

This paper presents an architecture for improving surveillance applications based on the service-oriented paradigm, with smartphones as user terminals, allowing dynamic application composition and increasing the flexibility of the system. Building on moving-object detection research on video sequences, the movement of people is tracked using video surveillance. The moving object is identified using image subtraction: the background image is subtracted from the foreground image, and the moving object is derived from the difference. A threshold value is then computed so that frames containing motion can be identified from the background-subtraction result, and the movement of the object is tracked accurately. The paper describes a low-cost, intelligent, mobile phone-based wireless video surveillance solution using moving-object recognition technology, which can be useful in various security systems and in environmental surveillance. The fundamental rule of moving-object detection is given, followed by a self-adaptive background representation that updates automatically and in a timely manner to adapt to slow, slight changes in normal surroundings. When the difference between the currently captured image and the background exceeds a certain threshold, a moving object is judged to be in the current view, and the mobile phone automatically notifies the central control unit or the user through SMS (Short Message Service). The main advantage of this system is that when an unknown image is captured, it automatically alerts the user by sending an SMS to the user's mobile phone.
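
The background-subtraction and thresholding step described above can be sketched in a few lines. The following Python snippet is only an illustration (not the authors' implementation); the update rate, intensity threshold, and minimum changed-pixel count are assumed values.

```python
# Illustrative sketch: frame differencing against a self-adaptive background
# with a global threshold, as described in the abstract.
import numpy as np

ALPHA = 0.05        # assumed background update rate
THRESHOLD = 30      # assumed per-pixel intensity difference threshold
MIN_PIXELS = 200    # assumed minimum changed-pixel count to flag motion

def update_background(background, frame, alpha=ALPHA):
    """Slowly adapt the background to slight changes in the scene."""
    return (1 - alpha) * background + alpha * frame

def detect_motion(frame, background, threshold=THRESHOLD, min_pixels=MIN_PIXELS):
    """Return True if the frame-background difference exceeds the threshold over enough pixels."""
    diff = np.abs(frame.astype(float) - background)
    moving_mask = diff > threshold
    return moving_mask.sum() > min_pixels, moving_mask

# Example with synthetic grayscale frames (values in [0, 255]):
background = np.full((120, 160), 100.0)
frame = background.copy()
frame[40:60, 70:90] += 80            # a bright "intruder" region
alert, mask = detect_motion(frame, background)
background = update_background(background, frame)
if alert:
    print("Motion detected: would notify the user via SMS")  # placeholder for the SMS step
```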

Keywords: security, graphical password, persuasive cued click points

Procedia PDF Downloads 529
6640 Electrospray Deposition Technique of Dye Molecules in the Vacuum

Authors: Nouf Alharbi

Abstract:

The electrospray deposition technique has become an important method that enables fragile, nonvolatile molecules to be deposited in situ in high-vacuum environments. Furthermore, it is considered one of the ways to close the gap between basic surface science and molecular engineering, representing a gradual broadening of the scope of scientific research. This paper also discusses one of the most important techniques developed to help further develop and characterize the electrospray source, by providing data collected using an image charge detection instrument. Image charge detection mass spectrometry (CDMS) is used to measure the speed and charge distributions of the molecular ions. In addition, data from SIMION simulations of the energies and masses of the molecular ions through the system are included in order to refine the mass-selection process.
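
For context, charge detection mass spectrometry relates an ion's mass to its measured charge and speed through the acceleration energy, qU = ½mv², so m = 2qU/v². The sketch below is a generic illustration of this relation, not the instrument's analysis code; the charge state, speed, and potential are assumed example values.

```python
# Illustrative CDMS-style mass estimate: q*U = 0.5*m*v**2  =>  m = 2*q*U / v**2
E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def ion_mass_amu(n_charges, speed_m_per_s, accel_potential_v):
    """Estimate ion mass (in u) from measured charge state and speed."""
    q = n_charges * E_CHARGE
    mass_kg = 2.0 * q * accel_potential_v / speed_m_per_s ** 2
    return mass_kg / AMU

# Assumed example values: 50 charges, 300 m/s after a 100 V acceleration stage
print(f"{ion_mass_amu(50, 300.0, 100.0):.3e} u")
```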

Keywords: charge, deposition, electrospray, image, ions, molecules, SIMION

Procedia PDF Downloads 129
6639 Screening Deformed Red Blood Cells Irradiated by Ionizing Radiations Using Windowed Fourier Transform

Authors: Dahi Ghareab Abdelsalam Ibrahim, R. H. Bakr

Abstract:

Ionizing radiation, such as gamma radiation and X-rays, has many applications in medical diagnosis and cancer treatment. In this paper, we use the windowed Fourier transform to extract the complex image of deformed red blood cells; the real values of the complex image are used to extract the best fit of the deformed cell boundary. Male albino rats are irradiated by γ-rays from ⁶⁰Co. The rats are anesthetized with ether, and blood samples are collected from the eye vein in heparinized capillary tubes for studying the radiation-damage effect in vivo with the proposed windowed Fourier transform. The peripheral blood films are prepared according to the Brown method and photographed using an Automatic Image Contour Analysis system (SAMICA) from ELBEK-Bildanalyse GmbH, Siegen, Germany. The SAMICA system is equipped with an electronic camera connected to a computer through a built-in interface card, and the image can be magnified up to 1200 times and displayed on the computer. The images of the peripheral blood films are then analyzed by the windowed Fourier transform method to extract the precise deformation from the best fit. Based on accurate deformation evaluation of the red blood cells, diseases can be diagnosed in their early stages.
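
A windowed Fourier transform of this kind can be sketched as a Gaussian-windowed complex filter applied to the intensity image, which yields a complex image whose real part can then be used for boundary fitting. The snippet below is a minimal illustration, not the authors' implementation; the window width and centre frequency are arbitrary assumptions.

```python
# Minimal sketch of a windowed Fourier (Gabor-type) filter producing a complex image.
import numpy as np
from scipy.signal import fftconvolve

def windowed_fourier_filter(image, sigma=8.0, fx=0.08, fy=0.0):
    """Convolve the image with a Gaussian-windowed complex exponential."""
    half = int(3 * sigma)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    window = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    kernel = window * np.exp(2j * np.pi * (fx * x + fy * y))
    # The result is complex; its real part / phase can be used for boundary fitting.
    return fftconvolve(image.astype(float), kernel, mode="same")

# Example on a synthetic fringe-like pattern:
yy, xx = np.mgrid[0:256, 0:256]
test = 128 + 50 * np.cos(2 * np.pi * 0.08 * xx)
cimg = windowed_fourier_filter(test)
print(cimg.real.shape, np.abs(cimg).max() > 0)
```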

Keywords: windowed Fourier transform, red blood cells, phase wrapping, image processing

Procedia PDF Downloads 76
6638 Content-Based Mammograms Retrieval Based on Breast Density Criteria Using Bidimensional Empirical Mode Decomposition

Authors: Sourour Khouaja, Hejer Jlassi, Nadia Feddaoui, Kamel Hamrouni

Abstract:

Most medical images, and especially mammograms, are now stored in large databases. Retrieving a desired image is of great importance in order to find diagnoses of previous similar cases. Our method is implemented to assist radiologists in retrieving mammographic images whose breasts show a density aspect similar to that seen on the query mammogram. This is challenging given the importance of the density criterion in cancer prevention and its effect on segmentation issues. We use the Bidimensional Empirical Mode Decomposition (BEMD) to characterize the content of images and the Euclidean distance to measure similarity between images. Through experiments on the MIAS mammography image database, we confirm that the results are promising. The performance was evaluated using precision and recall curves comparing query and retrieved images. Computing recall and precision proved the effectiveness of applying CBIR in large mammographic image databases: we found a precision of 91.2% for mammography with a recall of 86.8%.
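
The retrieval and evaluation steps described above (Euclidean-distance ranking followed by precision and recall) can be sketched as follows; the feature vectors are assumed to come from a BEMD-based descriptor, which is not reproduced here.

```python
# Sketch of content-based retrieval by Euclidean distance and its precision/recall evaluation.
import numpy as np

def retrieve(query_features, database_features, k=10):
    """Return indices of the k database images closest to the query (Euclidean distance)."""
    distances = np.linalg.norm(database_features - query_features, axis=1)
    return np.argsort(distances)[:k]

def precision_recall(retrieved_idx, relevant_idx):
    retrieved, relevant = set(retrieved_idx), set(relevant_idx)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Toy example: 100 database images with 16-dimensional (assumed) feature vectors
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 16))
query = db[3] + 0.01 * rng.normal(size=16)        # near-duplicate of image 3
top = retrieve(query, db, k=5)
print(precision_recall(top, relevant_idx=[3, 7, 21]))
```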

Keywords: BEMD, breast density, content-based, image retrieval, mammography

Procedia PDF Downloads 227
6637 The Instruction of Imagination: A Theory of Language as a Social Communication Technology

Authors: Daniel Dor

Abstract:

The research presents a new general theory of language as a socially constructed communication technology, designed by cultural evolution for a very specific function: the instruction of imagination. As opposed to all other systems of intentional communication, which provide materials for the interlocutors to experience, language allows speakers to instruct their interlocutors in the process of imagining the intended meaning, instead of experiencing it. It is thus the only system that bridges the experiential gaps between speakers. This is the key to its enormous success.

Keywords: experience, general theory of language, imagination, language as technology, social essence of language

Procedia PDF Downloads 578
6636 Exploring the Nexus of Gastronomic Tourism and Its Impact on Destination Image

Authors: Usha Dinakaran, Richa Ganguly

Abstract:

Gastronomic tourism has evolved into a prominent niche within the travel industry, with tourists increasingly seeking unique culinary experiences as a primary motivation for their journeys. This research explores the intricate relationship between gastronomic tourism and its profound influence on the overall image of travel destinations. It delves into the multifaceted aspects of culinary experiences, tourists' perceptions, and the preservation of cultural identity, all of which play pivotal roles in shaping a destination's image. The primary aim of this study is to comprehensively examine the interplay between gastronomy and tourism, specifically focusing on its impact on destination image. The research seeks to achieve the following objectives: (1) investigate how tourists perceive and engage with gastronomic tourism experiences; (2) understand the significance of food in shaping the tourism image; (3) explore the connection between gastronomy and the destination's cultural identity; and (4) quantify the relationship between tourists' engagement in co-creation activities related to gastronomic tourism and their overall satisfaction with the quality of their culinary experiences. To achieve these objectives, a mixed-methods research approach will be employed, including surveys, interviews, and content analysis. Data will be collected from tourists visiting diverse destinations known for their culinary offerings. This research anticipates uncovering valuable insights into the nexus between gastronomic tourism and destination image. It is expected to shed light on how tourists' perceptions of culinary experiences impact their overall perception of a destination. Additionally, the study aims to identify factors influencing tourist satisfaction and how cultural identity is preserved and promoted through gastronomic tourism. The findings hold practical implications for destination marketers and stakeholders: understanding the symbiotic relationship between gastronomy and tourism can guide the development of more targeted marketing strategies, and promoting co-creation activities can enhance tourists' culinary experiences and contribute to the positive image of destinations. This study contributes to the growing body of knowledge regarding gastronomic tourism by consolidating insights from various studies and offering a comprehensive perspective on its impact on destination image. It offers a platform for future research in this domain and underscores the importance of culinary experiences in contemporary travel. In conclusion, this research endeavors to illuminate the dynamic interplay between gastronomic tourism and destination image, providing valuable insights for both academia and industry stakeholders in the field of tourism and hospitality.

Keywords: gastronomy, tourism, destination image, culinary

Procedia PDF Downloads 72
6635 Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology

Authors: Mina Dabirinezhad, Mohsen Bayat Pour, Amin Dabirinejad

Abstract:

This paper introduces a technology to address some of the deficiencies of direct digital radiology. Digital radiology is the latest advance in dental imaging and has become an essential part of dentistry. Direct digital radiology has two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during the image-taking process with the direct digital X-ray machine. For instance, they sometimes need to readjust the sensor in the patient's mouth and take the X-ray image again because of its low quality. Another problem is that the sensor may move in the patient's mouth, which produces an unusable image for the dentist; this makes the process time-consuming for dentists and dental nurses. On the other hand, taking several X-ray images creates problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor on the jaw. The authors propose a technology to solve the above-mentioned issues, called Self-Recognition Direct Digital Radiology (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of detecting the location of the sensor in the patient's mouth automatically. In addition to solving the aforementioned problems, SDDR technology has a smaller environmental impact than the previous approach.

Keywords: dental direct digital imaging, digital image receptor, digital X-ray machine, environmental impacts

Procedia PDF Downloads 135
6634 Multi-Atlas Segmentation Based on Dynamic Energy Model: Application to Brain MR Images

Authors: Jie Huo, Jonathan Wu

Abstract:

Segmentation of anatomical structures in medical images is essential for scientific inquiry into the complex relationships between biological structure and clinical diagnosis, treatment, and assessment. As a method of incorporating prior knowledge and the anatomical structure similarity between a target image and atlases, multi-atlas segmentation has been successfully applied to a variety of medical images, including brain, cardiac, and abdominal images. The basic idea of multi-atlas segmentation is to transfer the labels in the atlases to the coordinates of the target image by matching each target patch to atlas patches in its neighborhood. However, this technique is limited by the pairwise registration between the target image and the atlases. In this paper, a novel multi-atlas segmentation approach is proposed by introducing a dynamic energy model. First, the target is mapped to each atlas image by minimizing the dynamic energy function; then the segmentation of the target image is generated by weighted fusion based on the energy. The method is tested on the MICCAI 2012 Multi-Atlas Labeling Challenge dataset, which includes 20 target images and 15 atlas images. The paper also analyzes the influence of different parameters of the dynamic energy model on segmentation accuracy and measures the Dice coefficient obtained using different feature terms with the energy model. The highest mean Dice coefficient obtained with the proposed method is 0.861, which is competitive compared with recently published methods.
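
The weighted label fusion and Dice evaluation mentioned above can be illustrated with a minimal sketch; the energy-to-weight mapping used here is a placeholder, not the paper's dynamic energy model.

```python
# Sketch of weighted label fusion and Dice-coefficient evaluation on toy label maps.
import numpy as np

def weighted_label_fusion(atlas_labels, energies):
    """Fuse binary atlas label maps with weights inversely related to their energy."""
    weights = np.exp(-np.asarray(energies, dtype=float))
    weights /= weights.sum()
    fused = np.tensordot(weights, np.asarray(atlas_labels, dtype=float), axes=1)
    return (fused >= 0.5).astype(np.uint8)

def dice_coefficient(seg, reference):
    seg, reference = seg.astype(bool), reference.astype(bool)
    denom = seg.sum() + reference.sum()
    return 2.0 * np.logical_and(seg, reference).sum() / denom if denom else 1.0

# Toy example with three 32x32 atlas label maps and placeholder energies:
rng = np.random.default_rng(1)
truth = np.zeros((32, 32), np.uint8); truth[8:24, 8:24] = 1
atlases = [np.clip(truth + (rng.random(truth.shape) < 0.05), 0, 1) for _ in range(3)]
fused = weighted_label_fusion(atlases, energies=[1.0, 2.0, 1.5])
print(round(dice_coefficient(fused, truth), 3))
```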

Keywords: brain MRI segmentation, dynamic energy model, multi-atlas segmentation, energy minimization

Procedia PDF Downloads 327
6633 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform

Authors: Jie Zhao, Meng Su

Abstract:

Image recognition, one of the most critical technologies in computer vision, helps machines such as robots understand a scene and, if deployed appropriately, will drive a revolution in remote sensing and industrial automation. With the development of AI technologies, many sophisticated neural networks have been developed for image recognition. However, the computer vision platforms, that is, the hardware supporting these neural networks, are just as crucial as the network technologies themselves and need to be addressed as research subjects in their own right, since different platforms determine how well different neural networks can perform. In this paper, three different computer vision platforms, Jetson Nano (with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and Google Colab (web-based, using a GPU), are explored, and four prominent neural network architectures (AlexNet, VGG(16/19), GoogLeNet, and ResNet(18/34/50)) are investigated. The performance of each pairing of computer vision platform and neural network is evaluated in terms of recognition accuracy and time efficiency. In a case study using public ImageNet data, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints.
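
A minimal timing harness in the spirit of this evaluation is sketched below, assuming PyTorch and torchvision are available; the batch size, input resolution, and number of repetitions are arbitrary choices, and the models are instantiated with default weights purely for speed measurement.

```python
# Rough per-image inference timing of several torchvision architectures (not the authors' benchmark code).
import time
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
candidates = {"resnet18": models.resnet18(), "resnet50": models.resnet50(),
              "vgg16": models.vgg16(), "alexnet": models.alexnet()}
dummy = torch.randn(1, 3, 224, 224).to(device)   # one ImageNet-sized image

for name, net in candidates.items():
    net = net.to(device).eval()
    with torch.no_grad():
        net(dummy)                                # warm-up pass
        start = time.perf_counter()
        for _ in range(20):
            net(dummy)
        elapsed = (time.perf_counter() - start) / 20
    print(f"{name}: {elapsed * 1000:.1f} ms per image on {device}")
```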

Keywords: AlexNet, VGG, GoogLeNet, ResNet, Jetson Nano, CUDA, COCO-NET, CIFAR-10, ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Google Colab

Procedia PDF Downloads 79
6632 Effects of Financial and Non-Financial Accounting Information Reports on Corporate Credibility and Image of the Listed-Firms in Thailand

Authors: Anocha Rojanapanich

Abstract:

This research investigates the effect of financial and non-financial accounting information reports on corporate credibility, with strength of the board of directors and market environment volatility as moderating effects. Data were collected by questionnaire from non-financial companies listed on the Stock Exchange of Thailand, and multiple regression analysis was used to analyze the data. The results show that firms with greater financial and non-financial accounting information reporting gain greater corporate credibility; corporate reporting therefore has value for firms. Moreover, the strength of the board of directors positively moderates the relationship between financial and non-financial accounting information reports and corporate credibility, while market environment volatility negatively moderates this relationship. The contribution of accounting information reports to corporate credibility carries over to the corporate image; that is, the corporate image is affected by corporate credibility.
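
The moderation hypotheses described above are typically estimated with interaction terms in a multiple regression. The sketch below uses synthetic data and hypothetical variable names purely to illustrate such a specification; it does not reproduce the study's survey items or scales.

```python
# Illustrative moderated-regression specification with interaction terms.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "fin_report": rng.normal(size=n),        # financial accounting information reports
    "nonfin_report": rng.normal(size=n),     # non-financial accounting information reports
    "board_strength": rng.normal(size=n),    # moderator 1
    "market_volatility": rng.normal(size=n), # moderator 2
})
df["credibility"] = (0.5 * df.fin_report + 0.4 * df.nonfin_report
                     + 0.2 * df.fin_report * df.board_strength
                     - 0.2 * df.fin_report * df.market_volatility
                     + rng.normal(scale=0.5, size=n))

# Main effects plus interaction (moderation) terms, mirroring the hypotheses above
model = smf.ols("credibility ~ fin_report + nonfin_report "
                "+ fin_report:board_strength + fin_report:market_volatility", data=df).fit()
print(model.params.round(2))
```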

Keywords: corporate credibility, financial and non-financial reports, firms performance, corporate image

Procedia PDF Downloads 289
6631 A Study on Application of Elastic Theory for Computing Flexural Stresses in Preflex Beam

Authors: Nasiri Ahmadullah, Shimozato Tetsuhiro, Masayuki Tai

Abstract:

This paper presents a step-by-step procedure for using Elastic Theory to calculate the internal stresses in composite bridge girders prestressed by the preflexing technology, called Prebeam in Japan and preflex beam worldwide. Elastic Theory approaches preflex beams in the same way as conventional composite girders. Since a preflex beam undergoes different stages of construction, calculations are made using different sectional and material properties, and stresses are calculated in every stage using the properties of the specific section. Stress accumulation gives the available stress in a section of interest. The presence of concrete in the section implies prestress loss due to creep and shrinkage; however, more work is required in this field. In addition to the graphical presentation of this application, the paper discusses a graphical comparison between the results of an experimental study carried out on a preflex beam and the results of a simulation based on the Elastic Theory approach for an identical beam, using Finite Element Modeling (FEM), by the authors.
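
The stage-wise stress accumulation can be illustrated with the elastic flexural formula σ = M·y/I evaluated with the sectional properties valid at each construction stage and then summed. The stage names and numbers below are assumed for illustration and do not correspond to the beam analyzed in the paper.

```python
# Sketch of stage-wise flexural stress accumulation at one fibre of the girder section.
def bending_stress(moment_knm, y_m, inertia_m4):
    """Elastic flexural stress sigma = M*y/I, returned in MPa."""
    return moment_knm * 1e3 * y_m / inertia_m4 / 1e6

# Each construction stage uses the sectional properties valid at that stage:
stages = [
    # (stage name,                M [kN.m], y [m], I [m^4])  -- illustrative values
    ("preflexion of steel girder",  1200.0, 0.45, 8.0e-3),
    ("release after first concrete", -900.0, 0.55, 1.6e-2),
    ("deck concrete and live load",   700.0, 0.60, 2.4e-2),
]
total = 0.0
for name, M, y, I in stages:
    sigma = bending_stress(M, y, I)
    total += sigma                                   # stress accumulation
    print(f"{name}: {sigma:+.1f} MPa (running total {total:+.1f} MPa)")
```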

Keywords: composite girder, Elastic Theory, preflex beam, prestressing

Procedia PDF Downloads 275
6630 Enhancement of Underwater Haze Image with Edge Reveal Using Pixel Normalization

Authors: M. Dhana Lakshmi, S. Sakthivel Murugan

Abstract:

As light passes from source to observer in the water medium, it is scattered by suspended particulate matter. This scattering plagues captured images with non-uniform illumination, blurred details, halo artefacts, weak edges, and other degradations. To overcome this, pixel normalization with an Amended Unsharp Mask (AUM) filter is proposed to enhance the degraded image. To validate the robustness of the proposed technique irrespective of atmospheric light, the datasets considered were collected at two locations. For those images, the maximum and minimum pixel intensity values are computed and the image is normalized; the AUM filter is then applied to strengthen the blurred edges. Finally, the enhanced image is obtained with good illumination and contrast. Thus, the proposed technique removes the effect of scattering (de-hazing) and restores the perceptual information with enhanced edge detail. Both qualitative and quantitative analyses are performed using the standard no-reference metrics underwater image sharpness measure (UISM) and underwater image quality measure (UIQM), which assess color, sharpness, and contrast, for images from both locations. It is observed that the proposed technique shows strong performance compared with other deep-learning-based enhancement networks and traditional techniques, in an adaptive manner.
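
A minimal sketch of pixel normalization followed by an unsharp-mask step is given below; it uses a plain unsharp mask rather than the paper's Amended Unsharp Mask (AUM) filter, and the blur width and amount are assumed values.

```python
# Sketch of min-max pixel normalization followed by a plain unsharp mask.
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_pixels(image):
    """Stretch intensities to [0, 1] using the image's min/max pixel values."""
    image = image.astype(float)
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Strengthen weak edges by adding back the high-frequency residual."""
    blurred = gaussian_filter(image, sigma=sigma)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# Example on a synthetic hazy gradient with a faint edge:
img = np.tile(np.linspace(0.3, 0.6, 200), (200, 1))
img[:, 100:] += 0.05
enhanced = unsharp_mask(normalize_pixels(img))
print(enhanced.min(), enhanced.max())
```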

Keywords: underwater drone imagery, pixel normalization, thresholding, masking, unsharp mask filter

Procedia PDF Downloads 189
6629 Using Multiple Intelligences Theory to Develop Thai Language Skill

Authors: Bualak Naksongkaew

Abstract:

The purpose of this study was to compare pre- and post-test achievement of Thai language skills. The sample consisted of 40 tenth graders of the Secondary Demonstration School of Suan Sunandha Rajabhat University in the first semester of the academic year 2010. The researcher prepared the Thai lesson plans and the pre- and post-achievement tests administered at the end of the program. Data analyses were carried out using means, standard deviations, descriptive statistics, and an independent-samples t-test to compare pre- and post-test scores. The study showed a statistically significant difference at α = 0.05; therefore, the use of multiple intelligences theory can develop Thai language skills. After using multiple intelligences theory in the Thai lessons, results were higher than the standard level.
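
The pre/post comparison at α = 0.05 can be illustrated as follows with synthetic scores; this mirrors the independent-samples t-test reported in the abstract and is not the study's data or analysis script.

```python
# Illustrative comparison of pre- and post-test scores at alpha = 0.05 (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.normal(loc=62, scale=8, size=40)    # hypothetical pre-test scores
post = rng.normal(loc=70, scale=8, size=40)   # hypothetical post-test scores

t_stat, p_value = stats.ttest_ind(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```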

Keywords: multiple intelligences theory, Thai language skills, development, pre- and post-test achievement

Procedia PDF Downloads 420
6628 A Framework for Investigating Reverse Logistics Capability of E-Tailers

Authors: Wen-Shan Lin, Shu-Lu Hsu

Abstract:

Environmental concern and consumer rights have led e-tailers to adopt better strategies to facilitate product returns from customers. As the demand for reverse logistics (RL) continues to grow, little is known about what motivates e-tailers to enhance their RL capabilities and about the role RL capabilities play in enabling e-tailers to achieve better customer satisfaction and economic performance. Based on resource-based theory and institutional theory, this article proposes that the following factors play a critical role in influencing the RL capability of e-tailers: (a) financial resource commitment to RL, (b) managerial resource commitment to RL, and (c) institutional pressure to implement RL. Based on the role of these factors, the study provides a framework and propositions that serve to guide future research addressing the link among resources, institutional pressure, and RL capability.

Keywords: reverse logistics, e-tailing, resource-based theory, institutional theory

Procedia PDF Downloads 446
6627 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement

Authors: Hu Zhenxing, Gao Jianxin

Abstract:

Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing application in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors, while the stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled within a limited range. The distortion, in particular, is non-linear in a complex image acquisition system, so distortion correction must be carefully considered. Moreover, the distortion function is difficult to formulate using conventional models in complex image acquisition systems involving microscopes and other complex lenses, and errors of the distortion correction propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior process to convert the distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
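
The idea of a 2D B-spline mapping from distorted to ideal coordinates can be sketched with SciPy's bivariate smoothing splines, as below; the calibration points and the synthetic radial distortion are assumptions, not the paper's calibration data.

```python
# Sketch: fit smooth 2-D spline maps from distorted to ideal (distortion-free) coordinates.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
# Ideal grid of calibration points and a synthetic radial distortion of it
xi, yi = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
xi, yi = xi.ravel(), yi.ravel()
r2 = xi**2 + yi**2
xd = xi * (1 + 0.05 * r2) + 0.001 * rng.normal(size=xi.size)   # distorted x
yd = yi * (1 + 0.05 * r2) + 0.001 * rng.normal(size=yi.size)   # distorted y

# One spline per output coordinate: (xd, yd) -> xi and (xd, yd) -> yi
map_x = SmoothBivariateSpline(xd, yd, xi, kx=3, ky=3)
map_y = SmoothBivariateSpline(xd, yd, yi, kx=3, ky=3)

# Undistort an arbitrary measured point before stereo matching / triangulation
px, py = 0.52, -0.31
print(map_x(px, py, grid=False).item(), map_y(px, py, grid=False).item())
```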

Keywords: distortion, stereo-based digital image correlation, b-spline, 3D, 2D

Procedia PDF Downloads 491
6626 Computer-Aided Detection of Simultaneous Abdominal Organ CT Images by Iterative Watershed Transform

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver, spleen, and kidneys is regarded as a major primary step in the computer-aided diagnosis of abdominal organ diseases. In this paper, a semi-automated method is presented for abdominal organ segmentation in medical image data using mathematical morphology. Our proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a technique has been designed to suppress over-segmentation, based on the mosaic image and on the computation of the watershed transform. Our algorithm proceeds in two parts. In the first, we seek to improve the quality of the gradient-mosaic image; in this step, we propose improving the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen, and kidneys. To validate the proposed segmentation technique, we tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work.
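
A much-simplified version of the pipeline (smoothing, gradient image, markers, marker-controlled watershed) is sketched below. It substitutes a generic Perona-Malik-style diffusion and crude intensity markers for the paper's mosaic-image construction and hierarchical scheme, so it should be read only as an outline of the approach.

```python
# Simplified sketch: anisotropic-diffusion-like smoothing, gradient image, and watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def perona_malik(image, n_iter=10, kappa=20.0, gamma=0.2):
    """A few iterations of Perona-Malik-style anisotropic diffusion."""
    img = image.astype(float)
    def conductance(g):
        return np.exp(-(g / kappa) ** 2)
    for _ in range(n_iter):
        grad_n = np.roll(img, -1, axis=0) - img
        grad_s = np.roll(img, 1, axis=0) - img
        grad_e = np.roll(img, -1, axis=1) - img
        grad_w = np.roll(img, 1, axis=1) - img
        img += gamma * (conductance(grad_n) * grad_n + conductance(grad_s) * grad_s
                        + conductance(grad_e) * grad_e + conductance(grad_w) * grad_w)
    return img

# Synthetic "CT slice" with two bright organs on a dark background
img = np.zeros((128, 128)); img[20:60, 20:60] = 180; img[70:110, 70:110] = 120
img += np.random.default_rng(0).normal(0, 10, img.shape)

smoothed = perona_malik(img)
gradient = ndi.gaussian_gradient_magnitude(smoothed, sigma=1.0)
markers, _ = ndi.label(smoothed > 80)          # crude interior markers
labels = watershed(gradient, markers)          # marker-controlled watershed
print(np.unique(labels))
```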

Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, simultaneous organ segmentation, the watershed algorithm

Procedia PDF Downloads 430
6625 Body Image Dissatisfaction and Personal Behavioral Control in Obese Patients Who Are Attending Treatment

Authors: Mariela Gonzalez, Zoraide Lugli, Eleonora Vivas, Rosana Guzmán

Abstract:

The objective was to determine the predictive capacity of perceived self-efficacy for weight control, locus of weight control, and weight self-management skills for dissatisfaction with body image in obese people attending treatment. A cross-sectional study was conducted in the city of Maracay, Venezuela, with 243 obese patients attending treatment, 173 women and 70 men, aged between 18 and 57 years. The body mass index of the sample ranged between 29.39 and 44.14. The following instruments were used: the Body Shape Questionnaire (BSQ), the inventory of body weight self-regulation, the inventory of self-efficacy in the regulation of body weight, and the inventory of the locus of weight control. Descriptive statistics, measures of central tendency, correlation coefficients, and multiple regression were calculated; it was found that low perceived self-efficacy in weight control and a high external locus of control predict dissatisfaction with body image in obese patients attending treatment. The findings are a first approximation accounting for the importance of personal control variables in the study of psychological distress in overweight individuals.

Keywords: dissatisfaction with body image, obese people, personal control, psychological variables

Procedia PDF Downloads 426
6624 Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks

Authors: Khairani Binti Supyan, Fatimah Khalid, Mas Rina Mustaffa, Azreen Bin Azman, Amirul Azuani Romle

Abstract:

Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning pre-trained DNN models. It explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset, consisting of 6 perennial plant species and 13,481 images captured under various environmental conditions, was used in the experiments. Different strategies for fine-tuning, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 showcasing the highest accuracy of 99.78%. Despite ResNet50's superior performance, both VGG16 and InceptionV3 achieved commendable accuracies of 99.67% and 99.37%, respectively. The overall outcomes reaffirm the robustness of the fine-tuning approach across different deep neural network architectures, offering insights into strategies for optimizing model performance in the domain of perennial plant image classification.
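
A representative fine-tuning setup for one of the studied backbones (ResNet50, six classes) is sketched below; the head architecture, learning rates, and freezing schedule are illustrative assumptions rather than the configuration used in the study, and the data pipeline is omitted.

```python
# Sketch of transfer learning / fine-tuning with a pre-trained ResNet50 backbone.
import tensorflow as tf

NUM_CLASSES = 6
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False                      # first stage: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Second stage (fine-tuning): unfreeze the backbone with a lower learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # dataset objects not shown here
```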

Keywords: perennial plants, image classification, deep neural networks, fine-tuning, transfer learning, VGG16, ResNet50, InceptionV3

Procedia PDF Downloads 56
6623 Working Capital Management and Profitability of UK Firms: A Contingency Theory Approach

Authors: Ishmael Tingbani

Abstract:

This paper adopts a contingency theory approach to investigate the relationship between working capital management and profitability using data of 225 listed British firms on the London Stock Exchange for the period 2001-2011. The paper employs a panel data analysis on a series of interactive models to estimate this relationship. The findings of the study confirm the relevance of the contingency theory. Evidence from the study suggests that the impact of working capital management on profitability varies and is constrained by organizational contingencies (environment, resources, and management factors) of the firm. These findings have implications for a more balanced and nuanced view of working capital management policy for policy-makers.

Keywords: working capital management, profitability, contingency theory approach, interactive models

Procedia PDF Downloads 336
6622 Decision Making Approach through Generalized Fuzzy Entropy Measure

Authors: H. D. Arora, Anjali Dhiman

Abstract:

Uncertainty is found everywhere, and understanding it is central to decision making. Uncertainty emerges when one has less information than the total information required to describe a system and its environment. Uncertainty and information are so closely associated that the information provided by an experiment, for example, is equal to the amount of uncertainty removed. It may be pertinent to point out that uncertainty manifests itself in several forms, and various kinds of uncertainty may arise from random fluctuations, incomplete information, imprecise perception, vagueness, etc. For instance, one encounters uncertainty due to vagueness in communication through natural language. Uncertainty in this sense is represented by fuzziness resulting from imprecision in the meaning of a concept expressed by linguistic terms. The fuzzy set concept provides an appropriate mathematical framework for dealing with such vagueness. Both information theory, proposed by Shannon (1948), and fuzzy set theory, given by Zadeh (1965), play an important role in human intelligence and in various practical problems such as image segmentation and medical diagnosis. Numerous approaches and theories dealing with inaccuracy and uncertainty have been proposed by different researchers. In the present communication, we generalize the fuzzy entropy proposed by De Luca and Termini (1972), which corresponds to Shannon entropy (1948). Further, some basic properties of the proposed measure are examined. We also apply the proposed measure to a real-life decision-making problem.
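
The De Luca and Termini (1972) fuzzy entropy that the proposed measure generalizes, H(A) = -K Σ [μᵢ ln μᵢ + (1-μᵢ) ln(1-μᵢ)], can be computed directly from the membership values of a finite fuzzy set, as in the sketch below.

```python
# De Luca-Termini fuzzy entropy of a finite fuzzy set given by its membership values.
import numpy as np

def fuzzy_entropy(membership, k=1.0):
    mu = np.clip(np.asarray(membership, dtype=float), 1e-12, 1 - 1e-12)
    return -k * np.sum(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))

print(fuzzy_entropy([0.0, 1.0, 1.0]))   # crisp set -> (near) zero entropy
print(fuzzy_entropy([0.5, 0.5, 0.5]))   # maximal fuzziness -> maximal entropy
```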

Keywords: entropy, fuzzy sets, fuzzy entropy, generalized fuzzy entropy, decision making

Procedia PDF Downloads 436
6621 Multi-Channel Charge-Coupled Device Sensors Real-Time Cell Growth Monitor System

Authors: Han-Wei Shih, Yao-Nan Wang, Ko-Tung Chang, Lung-Ming Fu

Abstract:

A multi-channel real-time cell growth monitoring and evaluation system using charge-coupled device (CCD) sensors with 40X lenses, integrated with an NI LabVIEW image processing program, is proposed and demonstrated. The LED light source of the monitoring system is controlled by an 8051 microprocessor integrated with the NI LabVIEW software. In this study, the growth rate and morphology of RAW264.7 cells at the same concentration were examined in four different culture conditions (DMEM, LPS, LPS+G1, LPS+G2). Real-time cell growth images were captured and analyzed by NI Vision Assistant every 10 minutes in the incubator. An image binarization technique was applied to calculate the cell doubling time and cell division index. The doubling times and division indices of the four groups (DMEM, LPS, LPS+G1, LPS+G2) are 12.3 hr, 10.8 hr, 14.0 hr, and 15.2 hr, and 74.20%, 78.63%, 69.53%, and 66.49%, respectively. The image magnification of the multi-channel CCD real-time cell monitoring system is about 100X to 200X, which is comparable to a traditional microscope.
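
The doubling time extracted from binarized images follows the standard relation Td = Δt·ln 2 / ln(Nt/N0) for two cell counts separated by Δt; the counts in the sketch below are illustrative, not the study's measurements.

```python
# Doubling time from two cell counts N0 and Nt separated by dt hours.
import math

def doubling_time(n0, nt, dt_hours):
    return dt_hours * math.log(2) / math.log(nt / n0)

# e.g. counts from binarized images taken 24 h apart (illustrative numbers)
print(f"{doubling_time(n0=150, nt=600, dt_hours=24):.1f} h")
```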

Keywords: charge-coupled device (CCD), RAW264.7, doubling time, division index

Procedia PDF Downloads 354
6620 Eclectic Therapy in Approach to Clients’ Problems and Application of Multiple Intelligence Theory

Authors: Mohamed Sharof Mostafa, Atefeh Ahmadi

Abstract:

Most traditional single-modality psychotherapy and counselling approaches to clients' problems are based on the application of one therapy in all sessions. Modern developments in these sciences focus on eclectic and integrative interventions that consider all dimensions of an issue and all characteristics of the client. This paper presents an overview of eclectic therapy and its pros and cons. In addition, multiple intelligence theory and its application in eclectic therapy approaches are discussed.

Keywords: eclectic therapy, client, multiple intelligence theory, dimensions

Procedia PDF Downloads 700
6619 The Truth about Good and Evil: A Mixed-Methods Approach to Color Theory

Authors: Raniya Alsharif

Abstract:

The color theory of good and evil is the association of colors with the omnipresent concepts of good and evil, whereby human behavior and perception can be strongly influenced by seeing black and white, making these connotations so distinctive that they can be very hard to disentangle. This theory is a human construct that dates back to ancient Egypt and has been used since then in almost all forms of communication and expression, such as art, fashion, literature, and religious manuscripts, helping implant preconceived ideas that influence behavior and society. This is a mixed-methods study that uses surveys to collect quantitative data related to the theory and a vignette to collect qualitative data through a scenario in which participants aged between 18 and 25 style two characters, one with good and one with bad characteristics, in color-contrasting clothes. Both yield results about the nature of the preconceived perceptions associated with 'black and white' and 'good and evil', illustrating the important role of media and communications in human behavior and the subconscious, and also uncovering how far this theory goes in the age of social media enlightenment.

Keywords: color perception, interpretivism, thematic analysis, vignettes

Procedia PDF Downloads 118
6618 Traffic Density Measurement by Automatic Detection of the Vehicles Using Gradient Vectors from Aerial Images

Authors: Saman Ghaffarian, Ilgin Gökaşar

Abstract:

This paper presents a new automatic vehicle detection method for very high-resolution aerial images to measure traffic density. The proposed method starts by extracting road regions from the image using road vector data. Then, the road image is divided into equal sections, taking the resolution of the images into account. Gradient vectors of the road image are computed from the edge map of the corresponding image. Gradient vectors on each boundary of the sections are divided into groups where the gradient vectors significantly change their directions. Finally, the number of vehicles in each section is determined by calculating the standard deviation of the gradient vectors in each group and accepting a group as a vehicle when its standard deviation is above a predefined threshold value. The proposed method was tested on four very high-resolution aerial images acquired over Istanbul, Turkey, which illustrate roads and vehicles with diverse characteristics. The results show the reliability of the proposed method in detecting vehicles, producing an overall F1 accuracy of 86%.
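
The per-section decision rule (spread of gradient vectors against a predefined threshold) can be sketched as follows; the synthetic road strip, section count, and threshold value are assumptions for illustration only, and the boundary-wise grouping of gradient vectors is simplified.

```python
# Sketch of the per-section gradient-spread test for vehicle presence.
import numpy as np

def section_has_vehicle(section, std_threshold=5.0):
    """Flag a road section as containing a vehicle when the spread of its
    gradient magnitudes exceeds a predefined threshold (assumed value)."""
    gy, gx = np.gradient(section.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude.std() > std_threshold

# Synthetic road strip: mostly uniform asphalt, with one bright vehicle-like blob
road = np.full((64, 640), 90.0) + np.random.default_rng(0).normal(0, 2, (64, 640))
road[20:44, 300:340] = 200.0                       # vehicle-like blob
sections = np.split(road, 10, axis=1)              # equal sections along the road
flags = [section_has_vehicle(s) for s in sections]
print(flags, "-> vehicles detected in", int(np.sum(flags)), "section(s)")
```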

Keywords: aerial images, intelligent transportation systems, traffic density measurement, vehicle detection

Procedia PDF Downloads 374
6617 Recognition of Objects in a Maritime Environment Using a Combination of Pre- and Post-Processing of the Polynomial Fit Method

Authors: R. R. Hordijk, O. J. G. Somsen

Abstract:

Traditionally, radar systems are the eyes and ears of a ship. However, these systems have their drawbacks, and nowadays they are extended with systems that work with video and photos. Processing the data from these videos and photos is, however, very labour-intensive, and efforts are being made to automate this process. A major problem when trying to recognize objects in water is that the 'background' is not homogeneous, so traditional image recognition techniques do not work well. The main question is whether a method can be developed to automate this recognition process. A large number of parameters are involved in the identification of objects in such images. One is varying the resolution. In this research, the resolution of some images was reduced to the extreme value of 1% of the original to reduce clutter before the polynomial fit (pre-processing). It turned out that the searched object was clearly recognizable, as its grey value was well above the average. Another approach is to take two images of the same scene shortly after each other and compare the results. Because the water (waves) fluctuates much faster than an object floating in it, one can expect the object to be the only stable item in the two images. Both of these methods (pre-processing and comparing two images of the same scene) delivered useful results. Though it is too early to conclude that all image problems can be solved with these methods, they are certainly worthwhile for further research.
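
The polynomial-fit pre-processing step can be sketched by fitting a low-order surface to the (downsampled) water background and flagging pixels whose grey values lie well above it; the synthetic image and the 3-sigma cut-off below are assumptions.

```python
# Sketch: low-order polynomial surface fit of the water background plus residual thresholding.
import numpy as np

def fit_poly_surface(image):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(image.size), x.ravel(), y.ravel(),
                         x.ravel()**2, (x * y).ravel(), y.ravel()**2])
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)

rng = np.random.default_rng(3)
sea = 100 + 20 * rng.random((40, 60))        # low-resolution, cluttered water
sea[18:22, 28:33] += 60                      # small floating object

background = fit_poly_surface(sea)
residual = sea - background
objects = residual > 3 * residual.std()      # pixels well above the fitted background
print("object pixels found:", int(objects.sum()))
```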

Keywords: image processing, image recognition, polynomial fit, water

Procedia PDF Downloads 529
6616 Solving Dimensionality Problem and Finding Statistical Constructs on Latent Regression Models: A Novel Methodology with Real Data Application

Authors: Sergio Paez Moncaleano, Alvaro Mauricio Montenegro

Abstract:

This paper presents a novel statistical methodology for measuring and founding constructs in latent regression analysis. The approach uses the qualities of factor analysis for binary data with interpretations based on Item Response Theory (IRT). In addition, based on the fundamentals of submodel theory and on a convergence of many ideas from IRT, we propose an algorithm not only to solve the dimensionality problem (nowadays an open discussion) but also to open a new research field that promises fairer and more realistic qualifications for examinees and a revolution in IRT and educational research. Finally, the methodology is applied to a real data set, presenting impressive results in terms of coherence, speed, and precision. Acknowledgments: This research was financed by Colciencias through the project 'Multidimensional Item Response Theory Models for Practical Application in Large Test Designed to Measure Multiple Constructs', and both authors belong to the SICS Research Group at Universidad Nacional de Colombia.

Keywords: item response theory, dimensionality, submodel theory, factorial analysis

Procedia PDF Downloads 364
6615 Conformal Invariance and F(R,T) Gravity

Authors: P. Y. Tsyba, O. V. Razina, E. Güdekli, R. Myrzakulov

Abstract:

In this paper, we consider the equations of motion of F(R,T) gravity with regard to their property of conformal invariance. It is shown that in the general case such a theory is not conformally invariant. Special cases for the functions v and u, in which this property of the theory can appear, are studied.

Keywords: conformal invariance, gravity, space-time, metric

Procedia PDF Downloads 655
6614 A Psychoanalytical Approach to Edgar A. Poe’s Short Story ‘The Tell-Tale Heart’

Authors: José Antonio Núñez

Abstract:

Sigmund Freud's theory of psychoanalysis was a groundbreaking contribution to the study of the human psyche and behavior. Nowadays, psychoanalytic theory is applied in numerous fields, one of which is literature: literary criticism has put the basis of Freud's ideas into practice to analyze literary works. This essay analyzes Edgar A. Poe's short story 'The Tell-Tale Heart' through the lens of Freud's psychoanalytical perspective. In 1919, Freud published 'Das Unheimliche' (The Uncanny). In this article, the famous Austrian psychoanalyst set out his explanation of what he called 'the uncanny' and its relation to the human unconscious. In this paper, Freud's famous article is used to analyze Poe's short story 'The Tell-Tale Heart' and to find the analogies that exist between Poe's macabre short story and Freud's theory of 'the uncanny'.

Keywords: psychoanalysis, theory of the unconscious, the uncanny, unheimlich

Procedia PDF Downloads 600
6613 Measurement of Steady Streaming from an Oscillating Bubble Using Particle Image Velocimetry

Authors: Yongseok Kwon, Woowon Jeong, Eunjin Cho, Sangkug Chung, Kyehan Rhee

Abstract:

Steady streaming flow fields induced by a 500 µm bubble oscillating at 12 kHz were measured using microscopic particle image velocimetry (PIV). The accuracy of velocity measurement using the micro-PIV system was checked by comparing the measured velocity fields with the theoretical velocity profiles in fully developed laminar flow. The steady streaming flow velocities were measured in the sagittal plane of the bubble attached to the wall. The measured velocity fields showed an upward jet flow with two symmetric counter-rotating vortices, and the maximum streaming velocity was about 12 mm/s, which is within the range of velocities measured by other researchers. The measured streamlines were compared with the analytic solution and also showed reasonable agreement.
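
For readers unfamiliar with PIV, the core displacement estimate is the peak of the cross-correlation between corresponding interrogation windows of two frames. The sketch below illustrates that step on synthetic data (integer-pixel, FFT-based correlation only); it is not the micro-PIV processing used in the study.

```python
# Sketch of the PIV displacement estimate: peak of the FFT-based cross-correlation
# between two interrogation windows (synthetic particle images).
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel shift of win_b relative to win_a via circular cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(win_a)) * np.fft.fft2(win_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    shift[shift > shape / 2] -= shape[shift > shape / 2]   # wrap to signed shifts
    return shift                                           # (dy, dx) in pixels

rng = np.random.default_rng(5)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))     # known displacement
dy, dx = window_displacement(frame_a, frame_b)
# velocity = displacement * pixel_size / time_between_frames (calibration not shown)
print(dy, dx)                                              # expected: 3.0 -2.0
```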

Keywords: oscillating bubble, particle image velocimetry, microstreaming, vortices

Procedia PDF Downloads 406
6612 Language Learning, Drives and Context: A Grounded Theory of Learning Behavior

Authors: Julian Pigott

Abstract:

This paper introduces the Language Learning as a Means of Drive Engagement (LLMDE) theory, derived from a grounded theory analysis of interviews with Japanese university students. According to LLMDE theory, language learning can be understood as a means of engaging one or more of four self-fulfillment drives: the drive to expand one’s horizons (perspective drive); the drive to make a success of oneself (status drive); the drive to engage in interaction with others (communication drive); and the drive to obtain intellectual and affective stimulation (entertainment drive). While many theories of learner psychology focus on conscious agency, LLMDE theory addresses the role of the unconscious. In addition, supplementary thematic analysis of the data revealed the role of context in mediating drive engagement. Unexpected memorable events, for example, play a key role in instigating and, indirectly, in regulating learning, as do institutional and cultural contexts. Given the apparent importance of such factors beyond the immediate control of the learner, and given the pervasive role of habit and drives, it is argued that the concept of motivation merits theoretical reappraisal. Rather than an underlying force determining language learning success or failure, it can be understood to emerge sporadically in consciousness to promote behavioral change, or to protect habitual behavior from disruption.

Keywords: drives, grounded theory, motivation, significant events

Procedia PDF Downloads 141