Search results for: image semantic segmentation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3417

1377 Satisfaction of International Tourists during Their Visit to Bangkok, Thailand

Authors: Bovornluck Kuosuwan, Kevin Wongleedee

Abstract:

The purpose of this research was to study the level of satisfaction of international tourists in five important areas: satisfaction with visiting tourist destinations, satisfaction with the tourist image, satisfaction with value for money, satisfaction with service quality, and satisfaction compared with expectations. A probability random sample of 200 inbound tourists was utilized. A questionnaire was used to collect the data, and brief in-depth interviews were also used to gather opinions about the positive and negative aspects of their experience travelling in Thailand. The findings revealed that the majority of respondents had a medium level of satisfaction. Examined in detail, the areas can be ranked from highest to lowest mean satisfaction as follows: visiting tourist destinations, expectations, service quality, tourist image, and value for money.

Keywords: inbound tourists, satisfaction, Thailand, international tourists

Procedia PDF Downloads 322
1376 INCIPIT-CRIS: A Research Information System Combining Linked Data Ontologies and Persistent Identifiers

Authors: David Nogueiras Blanco, Amir Alwash, Arnaud Gaudinat, René Schneider

Abstract:

At a time when access to and sharing of information are crucial in the world of research, technologies such as persistent identifiers (PIDs), Current Research Information Systems (CRIS), and ontologies can create platforms for information sharing, provided they respond to the need to disambiguate their data by assuring interoperability within and between systems. INCIPIT-CRIS is a continuation of the former INCIPIT project, whose goal was to set up an infrastructure for the low-cost attribution of PIDs with high granularity based on Archival Resource Keys (ARKs). INCIPIT-CRIS is its logical consequence: a research information management system developed from scratch. The system has been built on and around the Schema.org ontology with a further articulation of the use of ARKs, on top of the infrastructure previously implemented (i.e., INCIPIT), in order to enhance the persistence of URIs. INCIPIT-CRIS thus aims to be the hinge between previously separate aspects, namely CRIS, ontologies, and PIDs, in order to produce a powerful system that resolves disambiguation problems by combining an ontology such as Schema.org with unique persistent identifiers such as ARKs. It allows the sharing of information through a dedicated platform and ensures the interoperability of the system by representing the entirety of the data as RDF triples. This paper presents the implemented solution as well as its simulation in real life. We describe the underlying ideas and inspirations while going through the logic, the different functionalities implemented, and their links with ARKs and Schema.org. Finally, we discuss the tests performed with our project partner, the Swiss Institute of Bioinformatics (SIB), using large, real-world data sets.

Keywords: current research information systems, linked data, ontologies, persistent identifier, schema.org, semantic web

Procedia PDF Downloads 132
1375 The Effect of Tip Parameters on Vibration Modes of Atomic Force Microscope Cantilever

Authors: Mehdi Shekarzadeh, Pejman Taghipour Birgani

Abstract:

In this paper, the effect of the mass and height of the tip on the flexural vibration modes of an atomic force microscope (AFM) rectangular cantilever is analyzed. A closed-form expression for the sensitivity of the vibration modes is derived using the relationship between the resonant frequency and the contact stiffness of the cantilever and sample. Each mode has a different sensitivity to variations in surface stiffness, and this sensitivity directly controls the image resolution. The results show that an AFM cantilever is more sensitive when the tip mass is lower, and that the first mode is the most sensitive. The effect of changes in tip height on the flexural sensitivity is negligible.

Keywords: atomic force microscope, AFM, vibration analysis, flexural vibration, cantilever

Procedia PDF Downloads 383
1374 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information about cultural heritage (CH). The basis of this tool is a 3D model generally obtained through a cloud-to-BIM procedure. There are different methods for creating an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and creation of objects. The selection among these methods depends on the desired level of development (LOD), level of information (LOI), and grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural elements (e.g., regular walls, columns, floors, and wall openings) and architectural elements (e.g., cornices, moldings, and other minor details) using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added to the HBIM model as text parameters and generic model families, respectively. This methodology allows the documentation of CH through a relatively simple process that ensures adequate LOD, LOI, and GOG levels. In addition, because the method is easy to implement and uses only one BIM software package with its respective plugin for the scan-to-BIM modeling process, it can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.

Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit

Procedia PDF Downloads 141
1373 Overview and Future Opportunities of Sarcasm Detection on Social Media Communications

Authors: Samaneh Nadali, Masrah Azrifah Azmi Murad, Nurfadhlina Mohammad Sharef

Abstract:

Sarcasm is a common phenomenon in social media: a nuanced form of language for stating the opposite of what is implied. Due to this intentional ambiguity, the analysis of sarcasm is a difficult task not only for a machine but even for a human. Although sarcasm has an important effect on sentiment, it is usually ignored in social media analysis because sarcasm analysis is too complicated. While a few systems exist that can detect sarcasm, almost no work has reviewed the existing research in this area. This survey presents a nearly complete picture of sarcasm detection techniques and the related fields with brief details. The main contributions of this paper are an illustration of recent trends in sarcasm analysis and a discussion of the gaps, along with a proposed new framework that can be explored.

Keywords: sarcasm detection, sentiment analysis, social media, sarcasm analysis

Procedia PDF Downloads 453
1372 2D Point Clouds Features from Radar for Helicopter Classification

Authors: Danilo Habermann, Aleksander Medella, Carla Cremon, Yusef Caceres

Abstract:

This paper analyzes the ability of 2D point cloud features to classify different models of helicopters using radar. The method does not need to estimate the blade length, the number of blades, or the period of their micro-Doppler signatures. It is also not necessary to generate spectrograms (or any other image based on the time and frequency domains). This work transforms a radar return signal into a 2D point cloud and extracts features from it. Three classifiers are used to distinguish nine different helicopter models in order to analyze the performance of the features used in this work. The high accuracy obtained with each of the classifiers demonstrates that 2D point cloud features are very useful for classifying helicopters from radar signals.
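The abstract does not specify which descriptors are extracted from the point cloud. As a rough, hypothetical sketch (the feature set below is our assumption, not the authors' actual features), simple geometric descriptors of a 2D point cloud could be computed like this:

```python
import statistics

def point_cloud_features(points):
    """Compute simple geometric descriptors of a 2D point cloud.

    points: list of (x, y) tuples, e.g. (time sample, echo amplitude).
    Returns a feature vector usable by a supervised classifier.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = statistics.mean(xs), statistics.mean(ys)      # centroid
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)  # spread along each axis
    bbox_area = (max(xs) - min(xs)) * (max(ys) - min(ys))  # bounding-box area
    return [cx, cy, sx, sy, bbox_area]

cloud = [(0, 1.0), (1, 2.0), (2, 1.5), (3, 0.5)]
print(point_cloud_features(cloud))
```

Such a vector would then be fed to each of the three supervised classifiers; the point of the paper's approach is that no micro-Doppler period or blade-count estimation is needed first.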

Keywords: helicopter classification, point clouds features, radar, supervised classifiers

Procedia PDF Downloads 225
1371 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving-model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, high-speed particle image velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased by larger roughness, especially when applied at heights close to the measuring plane. The roughness elements also cause strong fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factor rapidly approaches a constant value. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
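The integral quantities named above (displacement thickness, momentum thickness, form factor) can be estimated numerically from a measured velocity profile. A minimal sketch, using an assumed 1/7-power turbulent profile rather than the measured ICE3 data:

```python
def boundary_layer_integrals(u_over_U, delta, n=2000):
    """Trapezoidal estimates of displacement thickness (delta*),
    momentum thickness (theta), and form factor H = delta*/theta
    for a velocity profile u/U = f(y/delta)."""
    h = delta / n
    disp = mom = 0.0
    for i in range(n + 1):
        eta = i / n                       # dimensionless wall distance y/delta
        r = u_over_U(eta)                 # local velocity ratio u/U
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end weights
        disp += w * (1.0 - r) * h         # integrand of delta*
        mom += w * r * (1.0 - r) * h      # integrand of theta
    return disp, mom, disp / mom

# Classic 1/7-power turbulent profile as a stand-in for PIV data;
# analytically delta* = delta/8, theta = 7*delta/72, H = 9/7 ≈ 1.286.
d_star, theta, H = boundary_layer_integrals(lambda eta: eta ** (1 / 7), delta=1.0)
print(d_star, theta, H)
```

A form factor that settles near a constant value along the model, as reported above, is the numerical signature of the equilibrium state the abstract describes.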

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 304
1370 Assessing Pain Using Morbid Motion Monitor System in the Pain Management of Nurse Practitioner

Authors: Mohammad Reza Dawoudi

Abstract:

With the increasing number of patients suffering from chronic pain, several methods for evaluating chronic pain have been suggested. Morbid motion has been defined as an indicator of the rate of pain and is linked with various co-morbid conditions. This study provides a summary of procedures useful for performing direct behavioral observation in hospital settings. We describe the need for and usefulness of comprehensive 'morbid motion' observations; provide a primer on the identification, definition, and assessment of morbid behaviors; and outline and discuss specific procedures, including formulating referral motions and describing and conducting the observation. We also provide practical devices for observing and for analyzing the obtained information into a report that guides clinical intervention.

Keywords: assessing pain, DNA modeling, image matching technique, pain scale

Procedia PDF Downloads 405
1369 The Value of Store Choice Criteria on Perceived Patronage Intentions

Authors: Susana Marques

Abstract:

Research on how store environment cues influence consumers' store choice decision criteria, such as store operations, product quality, monetary price, store image, and sales promotion, is sparse; especially absent is research on the simultaneous impact of multiple store environment cues. The authors propose a comprehensive store choice model that includes three types of store environment cues as exogenous constructs, various store choice criteria as possible mediating constructs, and store patronage intentions as an endogenous construct. The model was tested using structural equation modelling on a sample of 561 hypermarket customers and is partially supported.

Keywords: store choice, store patronage, structural equation modelling, retailing

Procedia PDF Downloads 270
1368 Linguistic Misinterpretation and the Dialogue of Civilizations

Authors: Oleg Redkin, Olga Bernikova

Abstract:

Globalization and migration have made cross-cultural contacts more frequent and intensive. Sometimes these contacts may lead to misunderstanding between partners in communication and to misinterpretations of verbal messages, which some researchers tend to consider a 'clash of civilizations'. In most cases, the reasons may be found in cultural and linguistic differences and hence in misinterpretations of intentions and behavior. The current research examines factors of verbal and non-verbal communication that should be taken into consideration in cross-cultural contacts. Language is one of the most important manifestations of the cultural code, and it is often considered one of the special features of a civilization. The Arabic language, in particular, is commonly associated with Islam and the Arab-Muslim civilization. It is one of the most important markers of self-identification for more than 200 million native speakers. Arabic is the language of the Quran and hence the symbol of religious affiliation for more than one billion Muslims around the globe. Adequate interpretation of Arabic texts requires profound knowledge of its grammar and the semantics of its vocabulary. Communicating sides who belong to different cultural groups are guided by different models of behavior and hierarchies of values; moreover, the vocabulary each of them uses in the dialogue may convey different semantic realities and vary in connotations. In this context, direct, literal translation in most cases cannot adequately convey the meaning of the original message. In addition, the peculiarities and diversity of extralinguistic information, such as body language, communicative etiquette, cultural background, and religious affiliations, may make the dialogue even more difficult.
It is very likely that the so-called 'clash of civilizations' in most cases is due to misinterpretation of the counterpart's means of discourse, such as language, cultural codes, and models of behavior, rather than to basic contradictions between the partners in communication. In the process of communication, one has to rely on universal values rather than focus on cultural or religious peculiarities, and to take into account the current linguistic and extralinguistic context.

Keywords: Arabic, civilization, discourse, language, linguistic

Procedia PDF Downloads 220
1367 Preparation and Structural Analysis of Nano-Ciprofloxacin by X-Ray Diffraction, Fourier Transform Infra-Red Spectroscopy, and Scanning Electron Microscopy (SEM)

Authors: Shahriar Ghammamy, Mehrnoosh Saboony

Abstract:

Purpose: To evaluate the spectral specifications (IR, XRD, and SEM) of nano-ciprofloxacin prepared by a top-down method (satellite mill). Methods: Ciprofloxacin was reduced to the nanoscale with a satellite mill, and its characteristics were evaluated by infrared spectroscopy, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The expectation is to enhance the antibacterial properties of nano-ciprofloxacin in comparison to ciprofloxacin. The IR spectrum of nano-ciprofloxacin was compared with that of ciprofloxacin; the two were in close agreement, with one difference: the peaks in the spectrum of nano-ciprofloxacin were sharper. X-ray powder diffraction analysis of nano-ciprofloxacin shows a particle diameter of 90.9 nm (on the basis of the Scherrer equation). The SEM image shows a globular shape for nano-ciprofloxacin.
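The particle diameter quoted above follows from the Scherrer equation, D = Kλ/(β cos θ), where β is the peak width (FWHM) in radians and θ half the peak position. A small sketch with illustrative numbers (the wavelength, peak width, and peak position below are assumptions for demonstration, not the paper's measured data):

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)) from an XRD peak.

    wavelength_nm: X-ray wavelength (e.g. 0.15406 nm for Cu K-alpha)
    fwhm_deg: peak full width at half maximum, in degrees 2-theta
    two_theta_deg: peak position, in degrees 2-theta
    K: shape factor, commonly taken as 0.9
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg) / 2  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative Cu K-alpha peak: 0.09 deg FWHM at 2-theta = 25 deg
print(scherrer_size(0.15406, 0.09, 25.0))  # size in nm
```

Note that the Scherrer equation gives a volume-averaged crystallite size, which is a lower bound on particle size when strain broadening is neglected.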

Keywords: antibiotic, ciprofloxacin, nano, IR, XRD, SEM

Procedia PDF Downloads 513
1366 Preparation and Structural Analysis of Nano-Ciprofloxacin by Fourier Transform Infra-Red Spectroscopy, X-Ray Diffraction, and Scanning Electron Microscopy (SEM)

Authors: Shahriar Ghammamy, Mehrnoosh Saboony

Abstract:

Purpose: To evaluate the spectral specifications (IR, XRD, and SEM) of nano-ciprofloxacin prepared by a top-down method (satellite mill). Methods: Ciprofloxacin was reduced to the nanoscale with a satellite mill, and its characteristics were evaluated by infrared spectroscopy, X-ray diffraction (XRD), and scanning electron microscopy (SEM). Expectation: To enhance the antibacterial properties of nano-ciprofloxacin in comparison to ciprofloxacin. The IR spectrum of nano-ciprofloxacin was compared with that of ciprofloxacin; the two were in close agreement, with one difference: the peaks in the spectrum of nano-ciprofloxacin were sharper. X-ray powder diffraction analysis shows a particle diameter of 90.9 nm (on the basis of the Scherrer equation). The SEM image shows a globular shape for nano-ciprofloxacin.

Keywords: antibiotic, ciprofloxacin, nano, IR, XRD, SEM

Procedia PDF Downloads 409
1365 Challenges and Benefits of Adopting ISO 9001 Certification in Algerian Agribusiness

Authors: Nouara Boulfoul, Fatima Brabez

Abstract:

This article presents the status of ISO 9001:2000 certification in some agro-food companies in Algeria. It discusses the challenges and contributions of certification as perceived by quality managers, as well as the difficulties encountered during certification. It also provides these managers' recommendations for companies that have a certification project. The results show that the top three reasons for adopting ISO 9001:2000 certification are building a better organization, reducing the costs of non-compliance, and meeting customer expectations. The contributions are both external (recognition, brand image, extension of markets, etc.) and internal (improvement of the organization, etc.). The recommendations mainly concern management motivation, staff awareness and involvement, and compliance with the requirements of the standard.

Keywords: quality management, certification, ISO 9001: 2000, food companies

Procedia PDF Downloads 226
1364 Real-Time Classification of Marbles with Decision-Tree Method

Authors: K. S. Parlak, E. Turan

Abstract:

The separation of marbles according to pattern quality is a process that relies on expert judgment. The classification phase is the most critical part in terms of economic value. In this study, a self-learning system is proposed that classifies marbles quickly and with high accuracy. The system extracts ten features from marble images taken by the camera, and the marbles are classified by the decision tree method using the obtained features. The user forms the training set by training the system during the marble classification stage, and the system evolves with every marble image that is classified. The aim of the proposed system is to minimize the error introduced by the person performing the classification and to do so quickly.
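The decision tree step can be illustrated with a minimal hand-rolled tree on toy marble features. The two features and the data below are invented for illustration only; the abstract does not specify the ten features the actual system uses:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    """Find the (feature, threshold) pair minimising weighted Gini impurity."""
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(X, y, depth=0, max_depth=3):
    """Recursively grow a tree; leaves hold the majority class."""
    if len(set(y)) == 1 or depth == max_depth:
        return max(set(y), key=y.count)
    split = best_split(X, y)
    if split is None:
        return max(set(y), key=y.count)
    _, f, t = split
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i, row in enumerate(X) if row[f] > t]
    return (f, t,
            build_tree([X[i] for i in li], [y[i] for i in li], depth + 1, max_depth),
            build_tree([X[i] for i in ri], [y[i] for i in ri], depth + 1, max_depth))

def predict(node, x):
    """Walk the tree until a leaf (a class label) is reached."""
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node

# Toy training set: [vein_contrast, brightness] -> pattern quality
X = [[0.2, 0.9], [0.3, 0.8], [0.7, 0.4], [0.8, 0.3]]
y = ["high", "high", "low", "low"]
tree = build_tree(X, y)
print(predict(tree, [0.25, 0.85]))
```

The "self-learning" aspect of the proposed system would correspond to appending each newly classified sample to `X`/`y` and rebuilding the tree.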

Keywords: decision tree, feature extraction, k-means clustering, marble classification

Procedia PDF Downloads 380
1363 A Convolutional Deep Neural Network Approach for Skin Cancer Detection Using Skin Lesion Images

Authors: Firas Gerges, Frank Y. Shih

Abstract:

Malignant melanoma, known simply as melanoma, is a type of skin cancer that appears as a mole on the skin. It is critical to detect this cancer at an early stage because it can spread across the body and may lead to the patient's death; when detected early, melanoma is curable. In this paper, we propose a deep learning model based on convolutional neural networks to automatically classify skin lesion images as malignant or benign. Images underwent pre-processing steps to diminish the effect of the normal skin region on the model. The results of the proposed model showed a significant improvement over previous work, achieving an accuracy of 97%.

Keywords: deep learning, skin cancer, image processing, melanoma

Procedia PDF Downloads 146
1362 GRCNN: Graph Recognition Convolutional Neural Network for Synthesizing Programs from Flow Charts

Authors: Lin Cheng, Zijiang Yang

Abstract:

Program synthesis is the task of automatically generating programs from a user specification. In this paper, we present a framework that synthesizes programs from flow charts, which serve as accurate and intuitive specifications. To do so, we propose a deep neural network called GRCNN that recognizes graph structure from an image. GRCNN is trained end-to-end and can predict the edge and node information of a flow chart simultaneously. Experiments show that the accuracy rate for synthesizing a program is 66.4%, and the accuracy rates for recognizing edges and nodes are 94.1% and 67.9%, respectively. On average, it takes about 60 milliseconds to synthesize a program.

Keywords: program synthesis, flow chart, specification, graph recognition, CNN

Procedia PDF Downloads 118
1361 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; it is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative encoder-decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model, whereas the same logic information is explicitly contained in the target explanation and easy to extract. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained under the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called logic supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark explanation-Stanford Natural Language Inference (e-SNLI) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 150
1360 Nanocellulose Reinforced Biocomposites Based on Wheat Plasticized Starch for Food Packaging

Authors: Belen Montero, Carmen Ramirez, Maite Rico, Rebeca Bouza, Irene Derungs

Abstract:

Starch is a promising polymer for producing biocomposite materials because it is renewable, completely biodegradable, and easily available at a low cost. Thermoplastic starches (TPS) can be obtained after the disruption and plasticization of native starch with a plasticizer. In this work, the solvent casting method was used to obtain TPS films from wheat starch plasticized with glycerol and reinforced with cellulose nanocrystals (CNC). X-ray diffraction analysis was used to follow the evolution of the crystallinity. The native wheat starch granules show a profile corresponding to A-type crystal structures, typical of cereal starches. When the TPS films are analyzed, a broad amorphous halo centered at 19° is obtained, indicating that the plasticization process is complete. SEM imaging was performed to analyse the morphology. The image of the raw wheat starch granules shows a bimodal granule size distribution, with some granules in large round disk-shaped forms (A-type) and others as smaller spherical particles (B-type). The image of the neat TPS surface shows a continuous surface; no starch aggregates or swollen granules can be seen, so the plasticization process is complete. On the surfaces of the reinforced TPS films, aggregates appear as the CNC concentration in the matrix increases. The influence of CNC on the mechanical properties of the TPS films has been studied by dynamic mechanical analysis. A direct relation exists between the storage modulus values, E', and the CNC content in the reinforced TPS films: the higher the nanocellulose content in the composite, the higher the value of E'. This reinforcement effect can be explained by the appearance of a strong and crystalline nanoparticle-TPS interphase. The thermal stability of the films was analysed by TGA; no influence of the incorporation of CNC on the thermal degradation behaviour of the films was observed.
Finally, the water absorption resistance of the films was analysed following the standard UNE-EN ISO 1998:483, and the percentage of water absorbed by the samples at each time was calculated. The addition of 5 wt% CNC to the TPS matrix leads to a significant improvement in the moisture resistance of the starch-based material, decreasing its diffusivity. This has been attributed to the formation of a nanocrystal network that prevents swelling of the starch, and therefore water absorption, and to the high crystallinity of cellulose compared to starch. In conclusion, the wheat starch film reinforced with 5 wt% cellulose nanocrystals seems to be a good alternative for short-life applications in the packaging industry because of its greater rigidity, thermal stability, and moisture sorption resistance.
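The water uptake percentage, and an apparent diffusivity from the short-time Fickian solution for a plane sheet, can be computed as in this sketch. The masses, thickness, and slope used are illustrative, and the plane-sheet short-time model is a standard assumption on our part, not something stated in the abstract:

```python
import math

def water_absorption_pct(dry_mass_g, wet_mass_g):
    """Percent water uptake relative to the dry film mass."""
    return 100.0 * (wet_mass_g - dry_mass_g) / dry_mass_g

def fickian_diffusivity(slope_per_sqrt_s, thickness_m):
    """Apparent diffusivity D from the initial slope k of Mt/Minf vs sqrt(t),
    using the short-time plane-sheet solution Mt/Minf = (4/h) * sqrt(D*t/pi),
    which rearranges to D = pi * (k * h / 4)**2."""
    return math.pi * (slope_per_sqrt_s * thickness_m / 4.0) ** 2

# A film of 1.000 g dry mass that weighs 1.230 g after immersion
print(water_absorption_pct(1.000, 1.230))
# Illustrative slope 0.01 1/sqrt(s) for a 0.3 mm thick film
print(fickian_diffusivity(0.01, 3e-4))
```

A lower fitted slope for the 5 wt% CNC films would translate directly into the lower diffusivity reported above.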

Keywords: biocomposites, nanocellulose, starch, wheat

Procedia PDF Downloads 210
1359 An Event-Related Potential Investigation of Speech-in-Noise Recognition in Native and Nonnative Speakers of English

Authors: Zahra Fotovatnia, Jeffery A. Jones, Alexandra Gottardo

Abstract:

Speech communication often occurs in environments where noise conceals part of a message, and listeners must compensate for the lack of auditory information by picking up distinct acoustic cues and using semantic and sentential context to recreate the speaker's intended message. This situation appears to be more challenging in a nonnative than in a native language. On the other hand, early bilinguals are expected to show an advantage over late bilingual and monolingual speakers of a language due to better executive functioning. In this study, English monolingual speakers were compared with early and late nonnative speakers of English to understand speech-in-noise (SIN) processing and its underlying neurobiological features. Auditory mismatch negativities (MMNs) were recorded using a double-oddball paradigm in response to a minimal pair differing in its middle vowel (beat/bit) at Wilfrid Laurier University in Ontario, Canada. The results did not show any significant structural or electroneural differences across groups. However, vocabulary knowledge correlated positively with performance on tests that measured SIN processing in participants who learned English after age 6, and their performance on the test correlated negatively with the integral area amplitudes in the left superior temporal gyrus (STG). In addition, the STG was engaged before the inferior frontal gyrus (IFG) in the noise-free and low-noise test conditions in all groups. We infer that the pre-attentive processing of words engages the temporal lobes earlier than the fronto-central areas and that vocabulary knowledge aids the nonnative perception of degraded speech.

Keywords: degraded speech perception, event-related brain potentials, mismatch negativities, brain regions

Procedia PDF Downloads 107
1358 Exploring Twitter Data on Human Rights Activism on Olympics Stage through Social Network Analysis and Mining

Authors: Teklu Urgessa, Joong Seek Lee

Abstract:

Social media is becoming the primary channel for activists to make their voices heard, for two main reasons. The first is the emergence of Web 2.0, which gave users the opportunity to become content creators rather than passive recipients. The second is the control of mainstream mass media outlets by governments and individuals with political and economic interests. This paper explores Twitter data on the network of actors discussing the marathon silver medalist at Rio 2016, who showed solidarity with the Oromo protesters in Ethiopia at the finish line of the race. The aim is to discover important insights using social network analysis and mining. The hashtag #FeyisaLelisa was used for the Twitter network search. The actors' network was visualized and analyzed: the central influencers during the first ten days of August were international media outlets, while by September they had shifted to individual activists. The degree distribution of the network is scale-free, with the frequency of degrees decaying by a power law. Text mining was also used to derive meaningful themes from the tweet corpus about the event. The semantic network indicated fifteen important clusters of concepts that provided insight into the why, who, where, and how of the situation surrounding the event. The sentiment of the words in the tweets was also analyzed, indicating that 95% of the opinions in the tweets were either positive or neutral. Overall, the findings showed that the marathoner's protest on the Olympic stage brought the issue of the Oromo protest to a global audience. A new framework is proposed for event-based social network analysis and mining, based on the practical procedures followed in this research for event-based social media sense-making.
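The scale-free claim can be checked by tabulating the empirical degree distribution from the collected edge list. A minimal sketch on a toy hub-and-spokes network (the toy edges stand in for the real retweet/mention data):

```python
from collections import Counter

def degree_distribution(edges):
    """Empirical degree frequency P(k) from an undirected edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    freq = Counter(deg.values())       # how many nodes have each degree k
    n = len(deg)
    return {k: c / n for k, c in sorted(freq.items())}

# Toy network: one hub (e.g. a media account) plus peripheral users
edges = [("media", u) for u in "bcdefg"] + [("b", "c")]
print(degree_distribution(edges))
```

In a scale-free network, plotting P(k) against k on log-log axes and fitting a line gives the power-law exponent; low degrees dominate while a few hubs carry very high degree, as in the media-outlet influencers described above.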

Keywords: human rights, Olympics, social media, network analysis, social network mining

Procedia PDF Downloads 256
1357 Gender Recognition with Deep Belief Networks

Authors: Xiaoqi Jia, Qing Zhu, Hao Zhang, Su Yang

Abstract:

A gender recognition system is able to tell the gender of a given person from a few frontal facial images. An effective gender recognition approach can improve the performance of many other applications, including security monitoring, human-computer interaction, image or video retrieval, and so on. In this paper, we present an effective method for the gender classification task in frontal facial images based on deep belief networks (DBNs), which pre-train the model and modestly improve accuracy. Our experiments have shown that pre-training with DBNs for the gender classification task is feasible and achieves a modest accuracy improvement on the FERET and CAS-PEAL-R1 facial datasets.
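Greedy layer-wise pre-training builds a DBN by training one restricted Boltzmann machine (RBM) at a time. The sketch below is a minimal Bernoulli RBM trained with one step of contrastive divergence (CD-1), written from the standard algorithm rather than the authors' code, with bias terms omitted for brevity:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    """Minimal Bernoulli RBM trained with CD-1 (biases omitted for brevity)."""

    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = [[random.gauss(0.0, 0.1) for _ in range(n_hid)]
                  for _ in range(n_vis)]
        self.lr = lr

    def hidden_probs(self, v):
        return [sigmoid(sum(v[i] * self.W[i][j] for i in range(len(v))))
                for j in range(len(self.W[0]))]

    def visible_probs(self, h):
        return [sigmoid(sum(h[j] * self.W[i][j] for j in range(len(h))))
                for i in range(len(self.W))]

    def cd1(self, v0):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        h0_sample = [1.0 if random.random() < p else 0.0 for p in h0]
        # Negative phase: one-step reconstruction of the visible layer.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Move weights toward the data correlations, away from the model's.
        for i in range(len(v0)):
            for j in range(len(h0)):
                self.W[i][j] += self.lr * (v0[i] * h0[j] - v1[i] * h1[j])

rbm = RBM(n_vis=6, n_hid=3)
for _ in range(200):
    rbm.cd1([1, 0, 1, 0, 1, 0])  # a toy binary "image" pattern
print(rbm.hidden_probs([1, 0, 1, 0, 1, 0]))
```

In a DBN, a second RBM would then be trained on the first layer's hidden activations, and so on, before supervised fine-tuning for the classification task.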

Keywords: gender recognition, deep belief networks, semi-supervised learning, greedy layer-wise RBMs

Procedia PDF Downloads 451
1356 Efficient DCT Architectures

Authors: Mr. P. Suryaprasad, R. Lalitha

Abstract:

This paper presents area- and delay-efficient architectures for the implementation of the one-dimensional and two-dimensional discrete cosine transform (DCT). These support different lengths (4, 8, 16, and 32). DCT blocks are used in the different video coding standards for image compression. The 2D-DCT calculation exploits the separability property of the 2D-DCT, so that the whole architecture is divided into two 1D-DCT calculations connected by a transpose buffer. Based on the existing 1D-DCT architecture, two types of 2D-DCT architectures, folded and parallel, are implemented; both use the same transpose buffer. The proposed transpose buffer occupies less area and achieves higher speed than the existing one. Hence the area, power, and delay of both 2D-DCT architectures are reduced.
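The separability the abstract relies on, a row-wise 1D-DCT pass, a transpose (the role of the transpose buffer), then a second 1D-DCT pass, can be checked in software. The sketch below is an illustrative numerical model of that dataflow, not the hardware architecture itself, using the orthonormal 4-point DCT-II on an arbitrary block:

```python
import math

def dct1d(x):
    """Orthonormal 1D DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def transpose(m):
    return [list(row) for row in zip(*m)]

def dct2d_separable(block):
    """2D-DCT as two 1D-DCT passes separated by a transpose."""
    rows = [dct1d(r) for r in block]              # first 1D-DCT pass (rows)
    cols = [dct1d(c) for c in transpose(rows)]    # transpose buffer + 2nd pass
    return transpose(cols)

block = [[52, 55, 61, 66],
         [70, 61, 64, 73],
         [63, 59, 55, 90],
         [67, 61, 68, 104]]
coeffs = dct2d_separable(block)
# For the orthonormal 4-point case the (0,0) DC term equals block sum / 4.
print(round(coeffs[0][0], 2))  # 267.25
```

Because the transform is orthonormal, the coefficient energy equals the pixel energy, which gives a handy sanity check for any hardware implementation of the same dataflow.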

Keywords: transposition buffer, video compression, discrete cosine transform, high efficiency video coding, two dimensional picture

Procedia PDF Downloads 519
1355 Enhanced Traffic Light Detection Method Using Geometry Information

Authors: Changhwan Choi, Yongwan Park

Abstract:

In this paper, we propose a method that allows faster and more accurate detection of traffic lights by a vision sensor during driving. DGPS is used to obtain the physical location of a traffic light; from the vision sensor's image, only the traffic-light area at that location is extracted to ascertain whether the signal is in operation and to determine its state. This method addresses the problem in existing research where low visibility at night, or reflection under bright light, makes it difficult to recognize the state of a traffic light, thus making driving unstable. We compared our traffic light recognition success rates in day and night road environments. Compared to previous research, the method showed similar performance during the day but a 50% improvement at night.
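The core idea, projecting the DGPS-known light position into the image and searching only there, can be sketched with a toy pinhole camera model. The intrinsics, coordinates, and ROI size below are invented for illustration and are not the paper's calibration:

```python
def project_to_image(pt_cam, f=800.0, cx=640.0, cy=360.0):
    """Project a camera-frame 3D point (x right, y down, z forward, metres)
    to pixel coordinates with a simple pinhole model (invented intrinsics)."""
    x, y, z = pt_cam
    return (f * x / z + cx, f * y / z + cy)

def roi_around(u, v, half=40, w=1280, h=720):
    """Clamp a square region of interest around the projected pixel."""
    u0 = max(0, int(u) - half)
    v0 = max(0, int(v) - half)
    u1 = min(w, int(u) + half)
    v1 = min(h, int(v) + half)
    return (u0, v0, u1, v1)

# Hypothetical traffic light: 1 m left of and 2.5 m above the camera,
# 40 m ahead (camera frame, y pointing down).
light_cam = (-1.0, -2.5, 40.0)
u, v = project_to_image(light_cam)
print(roi_around(u, v))  # only this window is searched for the light state
```

Restricting recognition to this small window is what makes the method both faster and more robust to night-time clutter than whole-image search.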

Keywords: traffic light, intelligent vehicle, night, detection, DGPS

Procedia PDF Downloads 323
1354 Customer Segmentation Revisited: The Case of the E-Tailing Industry in Emerging Market

Authors: Sanjeev Prasher, T. Sai Vijay, Chandan Parsad, Abhishek Banerjee, Sahakari Nikhil Krishna, Subham Chatterjee

Abstract:

With the rapid rise of internet retailing, the industry is set for a major shake-out. With little differentiation among competitors, companies find it difficult to segment and target the right shoppers. The objective of this study is to segment Indian online shoppers on the basis of two factors: website characteristics and shopping values. Together, these cover the extrinsic and intrinsic factors that affect shoppers as they visit web retailers. Data were collected via a questionnaire from 319 Indian online shoppers, and factor analysis was used to confirm the factors influencing shoppers in their selection of web portals. Thereafter, cluster analysis was applied, and different segments of shoppers were identified. The relationship between income groups and online shopper segments was tracked using correspondence analysis. Significant findings include that web entertainment and informativeness together contribute more than fifty percent of the total influence on web shoppers. Contrary to the general perception that shoppers seek utilitarian leverages, the present study highlights a preference for fun, excitement, and entertainment while browsing a website. Four segments, namely Information Seekers, Utility Seekers, Value Seekers, and Core Shoppers, were identified and profiled. Value Seekers emerged as the most dominant segment, with two-fifths of the respondents exhibiting both hedonic and utilitarian shopping values. With overlap among the segments, utilitarian shopping value gained prominence, covering more than fifty-eight percent of the total respondents. Moreover, a strong relationship was established between income levels and the segments of Indian online shoppers: as income levels increase, web shoppers shift from utility seekers to information seekers, core shoppers, and finally value seekers. Companies can strategically use this information for target marketing and align their web portals accordingly. This study can further be used to develop models revolving around satisfaction, trust, and customer loyalty.
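The cluster-analysis step can be illustrated with a plain k-means sketch on made-up (hedonic, utilitarian) shopper scores; the paper's actual segmentation worked on factor scores derived from the 319 survey responses, not on this toy data:

```python
import random

random.seed(1)

def kmeans(points, k, iters=20):
    """Plain k-means on 2D points; returns final centers and clusters."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                  (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        new_centers = []
        for i, c in enumerate(clusters):
            if c:
                new_centers.append((sum(x for x, _ in c) / len(c),
                                    sum(y for _, y in c) / len(c)))
            else:
                new_centers.append(centers[i])  # keep an empty cluster's center
        centers = new_centers
    return centers, clusters

# Invented (hedonic, utilitarian) scores for four loose shopper groups.
shoppers = [(1, 5), (2, 5), (1, 4),      # utility-driven
            (5, 1), (5, 2), (4, 1),      # fun/entertainment-driven
            (5, 5), (4, 5), (5, 4),      # "value seekers": both high
            (1, 1), (2, 1), (1, 2)]      # low-involvement browsers
centers, clusters = kmeans(shoppers, k=4)
print(sorted(len(c) for c in clusters))
```

Profiling each resulting cluster against income data (the correspondence-analysis step) is then a simple cross-tabulation of cluster membership against income group.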

Keywords: online shopping, shopping values, effectiveness of information content, web informativeness, web entertainment, information seekers, utility seekers, value seekers, core shoppers

Procedia PDF Downloads 193
1353 The Application of ICT in E-Assessment and E-Learning in Language Learning and Teaching

Authors: Seyyed Hassan Seyyedrezaei

Abstract:

The advent of the computer, and of ICT thereafter, has introduced many irrevocable changes in learning and teaching. There is a substantially growing need for the use of IT and ICT in language learning and teaching. In other words, the integration of Information Technology (IT) into online teaching is of vital importance for education and assessment. Given that the image of education has undergone drastic changes with the advent of technology, education systems and teachers must move beyond the walls of traditional classes and methods and join with other educational centers to revitalize education. With the advent of distance learning, online courses, and virtual universities, e-assessment has taken a prominent place in effective teaching and in meeting learners' educational needs. The purpose of this paper is twofold: first, after scrutinizing e-learning, it discusses how and why e-assessment is becoming widely used by educationalists and administrators worldwide; second, it enumerates a number of effective strategies for online assessment.

Keywords: e-assessment, e-learning, ICT, online assessment

Procedia PDF Downloads 567
1352 Enhancing Cognitive and Emotional Well-Being in an 85-Year-Old American-Dominican Veteran through Neuropsychological Intervention and Cognitive Stimulation

Authors: Natividad Natalia Angeles Manuel

Abstract:

In the Dominican Republic, American-Dominican veterans face unique challenges due to their dual identities and wartime experiences. This case study examines an 85-year-old veteran with memory impairments and emotional distress linked to military service. A neuropsychological assessment using standardized tools evaluated cognitive domains and functional abilities. Significant deficits in memory, orientation, semantic memory, and executive functions, alongside symptoms of Post-Traumatic Stress Disorder and depression, were identified. A six-month cognitive stimulation program included tailored interventions to enhance memory, attention, and executive skills through weekly sessions and group activities. Medical and physical therapy support aimed to improve overall cognitive, functional, and emotional outcomes. Follow-up evaluations showed improvements in memory retention, attention, task proficiency, and reduced depressive symptoms, highlighting the program's effectiveness in promoting emotional well-being and quality of life. Despite ongoing memory challenges and military-related nightmares, the veteran responded positively to interventions, demonstrating resilience and motivation. This study emphasizes the importance of personalized neuropsychological interventions for American-Dominican veterans in the Dominican Republic. Through assessment tools and focused cognitive stimulation strategies, healthcare providers can successfully alleviate cognitive and emotional challenges stemming from traumatic experiences in elderly veterans. Overall, integrated neuropsychological assessment and stimulation programs are shown to enhance cognitive resilience and emotional well-being, thus contributing to an enhanced quality of life for aging American-Dominican veterans.

Keywords: neuropsychology, cognitive stimulation, American-Dominican veterans, Dominican Republic, PTSD, memory deficits

Procedia PDF Downloads 35
1351 Real-Time Big-Data Warehouse a Next-Generation Enterprise Data Warehouse and Analysis Framework

Authors: Abbas Raza Ali

Abstract:

Big Data technology is gradually becoming a dire need of large enterprises. These enterprises generate massive amounts of offline and streaming data, in both structured and unstructured formats, on a daily basis. It is a challenging task to effectively extract useful insights from such large-scale datasets; sometimes it even becomes a technological constraint to manage more than a few months of transactional data history. This paper presents a framework to efficiently manage massively large and complex datasets. The framework has been tested on a communication service provider producing massively large, complex streaming data in binary format. The communication industry is bound by regulators to manage the history of its subscribers' call records, where every action of a subscriber generates a record. Managing and analyzing transactional data also allows service providers to better understand their customers' behavior; for example, deep packet inspection requires transactional internet usage data to explain subscribers' internet usage behavior. However, current relational database systems limit service providers to maintaining history only at a semantic level, aggregated at the subscriber level. The framework addresses these challenges by leveraging Big Data technology, which optimally manages and allows deep analysis of complex datasets. The framework has been applied to offload the service provider's existing Intelligent Network Mediation and relational Data Warehouse onto Big Data. The service provider has a subscriber base of more than 50 million, with yearly growth of 7-10%. The end-to-end process takes no more than 10 minutes and involves binary-to-ASCII decoding of call detail records, stitching of all the interrogations belonging to a call (transformations), and aggregation of all the call records of a subscriber.
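The stitching and aggregation steps at the end of the pipeline can be illustrated in miniature. The record layout below is invented for the sketch and is not the provider's actual CDR format; in the real pipeline these steps run as distributed transformations over the decoded records:

```python
from collections import defaultdict

# Hypothetical decoded call-detail records: several "interrogations"
# (partial records) per call, identified by (subscriber, call_id).
records = [
    {"sub": "A", "call": 1, "leg_secs": 30},
    {"sub": "A", "call": 1, "leg_secs": 15},   # same call, 2nd interrogation
    {"sub": "A", "call": 2, "leg_secs": 60},
    {"sub": "B", "call": 7, "leg_secs": 120},
]

# Stitch: merge all interrogations of one call into a single call record.
calls = defaultdict(int)
for r in records:
    calls[(r["sub"], r["call"])] += r["leg_secs"]

# Aggregate: roll the stitched calls up to the subscriber level.
per_sub = defaultdict(lambda: {"calls": 0, "secs": 0})
for (sub, _), secs in calls.items():
    per_sub[sub]["calls"] += 1
    per_sub[sub]["secs"] += secs

print(dict(per_sub))  # A: 2 calls, 105 s; B: 1 call, 120 s
```

Keeping the stitched call-level records, rather than only the subscriber-level aggregates a relational warehouse typically retains, is what enables the deeper analyses (such as deep packet inspection) described above.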

Keywords: big data, communication service providers, enterprise data warehouse, stream computing, Telco IN Mediation

Procedia PDF Downloads 175
1350 Marketing Strategy of Agricultural Products in Remote Districts: A Case Study of Mudan Township, Taiwan

Authors: Ying-Hsiang Ho, Hsiao-Tseng Lin

Abstract:

Mudan Township is a remote mountainous area in Taiwan. In recent years, due to population migration, inconvenient transportation, the digital divide, and low production, the marketing of agricultural products has become a major issue. This research aims to develop marketing strategies suitable for the agricultural products of rural areas. The main objective of this work is to conduct in-depth interviews with scholars and experts in the marketing field and, combined with the marketing mix (4Ps), to analyze and summarize possible marketing strategies for agricultural products in remote districts. The interviewees comprise seven experts from industry who have practical experience in producing, marketing, and selling agricultural products, and three professors who have experience in teaching marketing management. The in-depth interviews were conducted for about an hour each, using a pre-drafted interview outline. The results of the interviews were summarized by semantic analysis and are presented within the 4P framework. The results indicate that, in terms of product, high-quality products with original characteristics can be developed through the implementation of production history records, organic certification, and cultural packaging. Regarding place, we found that the use of emerging communities, an emphasis on cross-industry alliances, the improvement of rural households' information application capabilities, production and marketing groups, and a contractual farming system are the development priorities. In terms of promotion, emphasis should be placed on the management of internet social media and on word-of-mouth marketing. Mudan Township may consider promoting agricultural products through special festivals such as the farmers' market, the wild ginger flower season, and the hot spring season. This research also offers relevant recommendations to the government's public sector and related industries for the promotion of agricultural products in remote areas.

Keywords: marketing strategy, remote districts, agricultural products, in-depth interviews

Procedia PDF Downloads 125
1349 Flow Visualization in Biological Complex Geometries for Personalized Medicine

Authors: Carlos Escobar-del Pozo, César Ahumada-Monroy, Azael García-Rebolledo, Alberto Brambila-Solórzano, Gregorio Martínez-Sánchez, Luis Ortiz-Rincón

Abstract:

Numerical simulations of flow in complex biological structures have gained considerable attention in recent years. However, the major issue is the validation of the results. The present work demonstrates a Particle Image Velocimetry (PIV) flow visualization technique in complex biological structures, particularly in intracranial aneurysms. A methodology to reconstruct and generate a transparent model has been developed, along with visualization and particle tracking techniques. The generated transparent models allow the flow patterns to be visualized with a regular camera. The final goal is to use visualization as a tool that provides more information for treatment and surgery decisions in aneurysms.
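The particle tracking step can be sketched as greedy nearest-neighbour matching of particle centroids between two consecutive frames. Real PIV/PTV pipelines use correlation windows and sub-pixel fitting, so this is only a conceptual illustration on invented coordinates:

```python
def track(frame_a, frame_b, max_disp=5.0):
    """Greedily match each centroid in frame_a to its nearest unclaimed
    centroid in frame_b; return (position, displacement) pairs."""
    vectors = []
    taken = set()
    for (xa, ya) in frame_a:
        best, best_d2 = None, max_disp ** 2
        for j, (xb, yb) in enumerate(frame_b):
            if j in taken:
                continue
            d2 = (xb - xa) ** 2 + (yb - ya) ** 2
            if d2 < best_d2:
                best, best_d2 = j, d2
        if best is not None:
            taken.add(best)
            xb, yb = frame_b[best]
            vectors.append(((xa, ya), (xb - xa, yb - ya)))
    return vectors

# Toy particles advected roughly 1 px to the right between frames.
a = [(10.0, 10.0), (20.0, 15.0), (30.0, 12.0)]
b = [(11.0, 10.2), (21.1, 15.1), (30.9, 11.8)]
print(track(a, b))
```

Dividing each displacement by the inter-frame time gives the local velocity, which is the quantity compared against the numerical simulations for validation.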

Keywords: aneurysms, PIV, flow visualization, particle tracking

Procedia PDF Downloads 89
1348 Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms

Authors: Lucas Barbosa Da Silva, Jun Okamoto Jr.

Abstract:

Precision agriculture has long benefited from aerial imagery of crop fields. This important tool has made it possible to identify patterns in crop fields, generating useful information for production management. Reflectance intensity data in different ranges of the electromagnetic spectrum may indicate the presence or absence of nutrients in the soil of an area, and relations between the different light bands may yield even more detailed information. Knowledge of the nutrient content of the soil, or of the crop during its growth, is a valuable asset to the farmer who seeks to optimize yield. However, small farmers in Brazil often lack the resources to access this kind of information, and even when they do, it is not presented in a comprehensive and/or objective way. So the challenges of implementing this technology range from the sampling of the imagery using aerial platforms, building a mosaic from the images to cover the entire crop field, extracting the reflectance information, and analyzing its relationship with the parameters of interest, to displaying the results in a manner that lets the farmer make the necessary decisions more objectively. In this work, an analysis of soil nutrient content based on image processing of satellite imagery is proposed, and its outcomes are compared with a commercial laboratory's chemical analysis. Sources of satellite imagery are also compared, to assess the feasibility of using Google Earth data in this application and the impacts of doing so, versus using imagery from satellites such as Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery, and finally, the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, some soil parameters are estimated, namely the content of potassium, phosphorus, boron, and manganese, among others. The suitability of Google Earth imagery for this application is verified within a reasonable margin when compared to Pleiades satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area, and the impacts of image resolution and sampling time frame are discussed, allowing easy assessment of the results. The final results show that easier and cheaper remote sensing and analysis methods are possible and feasible alternatives for the small farmer, with little access to technological and/or financial resources, to make more accurate decisions about soil nutrient management.
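The band-relation idea can be illustrated with an NDVI-style normalized difference between the near-infrared and red bands. The reflectance values below are invented, and the paper's actual nutrient estimates come from comparing image-derived indices against laboratory analyses rather than from this single index:

```python
def ndvi(nir, red):
    """Normalized difference between near-infrared and red reflectance;
    NDVI-style ratios are a common per-pixel proxy for vegetation vigour."""
    return [[(n - r) / (n + r) if (n + r) else 0.0
             for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]

# Hypothetical 2x2 reflectance patches (values are illustrative only).
nir = [[0.60, 0.55],
       [0.20, 0.50]]
red = [[0.10, 0.12],
       [0.18, 0.11]]
print(ndvi(nir, red))  # high values: vigorous vegetation; low: bare/weak soil
```

A variability map of the kind described above is simply this index (or a nutrient regression built on it) rendered over the mosaic, pixel by pixel.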

Keywords: remote sensing, precision agriculture, mosaic, soil, nutrient content, satellite imagery, aerial imagery

Procedia PDF Downloads 173