Search results for: Disclosure of Personal Information.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4231

541 Celebrity Endorsement: How It Works When a Celebrity Fits the Brand and Advertisement

Authors: Göksel Şimşek

Abstract:

Celebrities are admired, appreciated and imitated all over the world, and as a natural result, many brands today choose to work with celebrities in their advertisements. The more brands include celebrities in their marketing communication strategies, the tougher the competition in this field becomes, and brands allocate a large portion of their marketing budgets to it. Brands invest in celebrities who will represent them in order to build the image they want to create.

This study aimed to spotlight the perceptions of Turkish customers regarding the use of celebrities in advertisements and marketing communication, and to understand their possible effects on subsequent purchasing decisions. In addition, consumers' reactions and perceptions were investigated in the context of the product-celebrity match, the extent to which the celebrity conforms to the concept of the advertisement, and the celebrity-target audience match.

To achieve this purpose, a quantitative case study was conducted on Mavi Jeans (a textile company), with information obtained through a survey. The results of the case study are supported by relevant theories on the main subject. The most valuable finding is that, instead of building an advertisement around whichever celebrity is in demand at the time, using a celebrity who fits the concept of the advertisement and feeds that concept rather than replacing it, that is, true celebrity endorsement, leads to more striking and positive results.

Keywords: Celebrity endorsement, product-celebrity match, advertising.

540 Discrete and Stationary Adaptive Sub-Band Threshold Method for Improving Image Resolution

Authors: P. Joyce Beryl Princess, Y. Harold Robinson

Abstract:

Image processing is a form of signal processing for which the input is an image and the output is also an image or a set of image parameters. Image resolution is frequently cited as an important aspect of an image, and in resolution enhancement, images are processed to obtain a higher-resolution version of a low-resolution input with a high PSNR value. In the proposed method, the Stationary Wavelet Transform (SWT) is used for edge detection and to minimize the information loss that downsampling in each of the Discrete Wavelet Transform (DWT) sub-bands causes, while the Inverse Discrete Wavelet Transform (IDWT) converts the object downsampled by the DWT back into a highly resolved image. Because a noisy input would otherwise produce an output with a low PSNR value, a noise-aware resolution enhancement technique based on adaptive sub-band thresholding is used. The combination of image denoising and resolution enhancement generates images with high PSNR values; the proposed method improves image resolution and reaches the optimized threshold.
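
To make the pipeline concrete, the following is a minimal sketch (using PyWavelets and SciPy) of the DWT/SWT idea described above: the undecimated SWT detail sub-bands compensate the loss introduced by the decimation inside the DWT before the IDWT produces the enlarged image. The wavelet choice, the bicubic interpolation, and the omission of the adaptive sub-band thresholding step are simplifying assumptions, not the paper's exact pipeline.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def enhance(low_res, wavelet="db1"):
    """Return an image roughly twice the size of `low_res` (2-D float array, even sides)."""
    low_res = np.asarray(low_res, dtype=float)
    # DWT: approximation + detail sub-bands at half the input resolution.
    _, (cH, cV, cD) = pywt.dwt2(low_res, wavelet)
    # SWT: undecimated detail sub-bands at full resolution; they compensate
    # the loss caused by the decimation inside the DWT.
    _, (sH, sV, sD) = pywt.swt2(low_res, wavelet, level=1)[0]
    # Interpolate the DWT details to full size and correct them with the SWT details.
    # (The adaptive sub-band thresholding / denoising step is omitted in this sketch.)
    corrected = tuple(zoom(d, 2.0, order=3) + s
                      for d, s in zip((cH, cV, cD), (sH, sV, sD)))
    # IDWT with the input image used as the approximation band gives the
    # high-resolution estimate (about twice the input size).
    return pywt.idwt2((low_res, corrected), wavelet)

high_res = enhance(np.random.rand(64, 64))
print(high_res.shape)   # -> (128, 128)
```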

Keywords: Image Processing, Inverse Discrete wavelet transform, PSNR.

539 Validation of Automotive Centrals Using Hardware in the Loop-Body Control Unit and Lights

Authors: Marley Rosa Luciano, Rodney Rezende Saldanha

Abstract:

The race for electrification and the need for innovation to attract customers have led the automotive industry to do things differently with vehicles. New emission control challenges and the availability of efficient technology are the pillars of this development. The growing demand to upgrade industrial manufacturing systems creates actions that directly impact vehicle production, and with this comes the search for new prototyping methods and virtual tools for the testing and validation of components and vehicle systems. The demand for Electronic Control Units (ECUs) is increasing due to the intelligence and safety expected in today's vehicles, which directly affects their development, performance, and functional testing. To keep up with global changes, the automotive industry uses different virtual environments to produce, verify, and validate its vehicles and the test prototypes used during development. Therefore, in this paper, integration and validation were performed using the Hardware-in-the-Loop (HIL) test platform, focusing on the Body Control Module (BCM) ECU. A brief commentary then reviews other test-medium platforms, such as the Plywood Buck (PWB), and examines the reliability, flexibility, installation time, and cost of the three test platforms, Software-in-the-Loop (SIL), Model-in-the-Loop (MIL), and HIL, to review their benefits, challenges, and issues in use, and to provide information to optimize the use of each platform and test medium.

Keywords: Automotive, Electronic Control Unit, xIL, Hardware-in-the-Loop.

538 ADA Tool for Satellite InSAR-Based Ground Displacement Analysis: The Granada Region

Authors: M. Cuevas-González, O. Monserrat, A. Barra, C. Reyes-Carmona, R. M. Mateos, J. P. Galve, R. Sarro, M. Cantalejo, E. Peña, M. Martínez-Corbella, J. A. Luque, J. M. Azañón, A. Millares, M. Béjar, J. A. Navarro, L. Solari

Abstract:

Geohazard-prone areas require continuous monitoring to detect risks, understand the phenomena occurring in those regions, and prevent disasters. Satellite interferometry (InSAR) has become a trustworthy technique for ground movement detection and monitoring in recent years. InSAR-based techniques make it possible to process large areas, providing a high number of displacement measurements at low cost. However, the results provided by such techniques are usually not easy for non-experienced users to interpret, which hampers their use by decision makers. This work presents a set of tools developed in the framework of different projects (Momit, Safety, U-Geohaz, Riskcoast), and an example of their use in the Granada coastal area (Spain) is shown. The ADA (Active Displacement Areas) tool has been developed with the aim of easing the management, use, and interpretation of InSAR-based results. It provides semi-automatic extraction of the most significant ADAs through the ADAFinder application. This tool aims to support the exploitation of the European Ground Motion Service (EU-GMS), which will offer reliable and systematic information on natural and anthropogenic ground motion phenomena across Europe.

Keywords: Ground displacements, InSAR, natural hazards, satellite imagery.

537 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality

Authors: Georgi Bebrov, Rozalina Dimova

Abstract:

In the field of quantum secure communication, there is no evaluation that characterizes quantum secure communication (QSC) protocols in a complete, general manner. This paper addresses the lack of such an evaluation by introducing an optimality evaluation, expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression of this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. Using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of the practical realization of a QSC protocol. Based on the error rates induced by the common quantum attacks (measure-and-resend, intercept-and-resend, probe, and entanglement-swapping attacks), the security evaluation of a QSC protocol is proposed as the minimum function taken over the error rates of these attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
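
The structure of the proposed score can be sketched in a few lines; the numeric values below are placeholders, and the paper's actual expressions for efficiency and practicality are not reproduced here.

```python
def optimality(efficiency, practicality, attack_error_rates):
    """Average of the three parameters; security is the minimum attack-induced error rate."""
    security = min(attack_error_rates.values())
    return (efficiency + practicality + security) / 3.0

# Illustrative (made-up) values, all normalized to [0, 1]:
score = optimality(
    efficiency=0.50,
    practicality=0.80,
    attack_error_rates={
        "measure-and-resend": 0.25,
        "intercept-and-resend": 0.25,
        "probe": 0.17,
        "entanglement-swapping": 0.25,
    },
)
print(round(score, 3))   # -> 0.49
```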

Keywords: Quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality.

536 The Effects of Local Factors on the Concentrations and Flora of Viable Fungi in School Buildings

Authors: H. Salonen, E. Castagnoli, C. Vornanen-Winqvist, R. Mikkola, C. Duchaine, L. Morawska, J. Kurnitski

Abstract:

A wide range of health effects among occupants is associated with exposure to bioaerosols from fungal sources. Although the exact role of these aerosols in causing symptoms and diseases is poorly understood, the important effect of bioaerosol exposure on human health is well recognized. Thus, there is a need to determine all of the contributing factors related to the concentration of fungi in indoor air. In this study, we reviewed and summarized the different factors affecting the concentrations of viable fungi in school buildings. The literature search was conducted using PubMed and Google Scholar, and we also searched the reference lists of selected articles. According to the literature, the main factors influencing the concentration of viable fungi in school buildings are moisture damage in building structures, the season (temperature and humidity conditions), the type and rate of ventilation, the number and activities of occupants, and diurnal variation. This study offers valuable information that can be used in the interpretation of fungal analyses and to decrease microbial exposure by reducing known sources and/or contributing factors. However, more studies of the local factors contributing to human microbial exposure in school buildings, as well as in other types of buildings and indoor environments, are needed.

Keywords: Fungi, concentration, indoor, school, contributing factor.

535 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network

Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon

Abstract:

In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops an assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor together with a Neural Network (NN). Batavia pineapples were used in the experiments, yielding 100 samples. The extracted juice of each sample was used to determine the Soluble Solid Content (SSC), which labeled the sample as sweet or unsweet. In terms of experimental equipment, a sensor cover was specifically designed to hold the sensor and light source so that the reflectance is read at a depth of five mm into the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected, and the NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The 510 nm and 900 nm reflectance values of the middle parts of the pineapples were used as features for the NN. With a sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. These results show that a low-cost compact spectroscopy sensor can classify the sweetness of the two pineapple classes with favorable results.
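
A minimal sketch of the classification stage is shown below: two standardized reflectance features feed a small ReLU network. The synthetic data, network size, train/test split, and the use of scikit-learn's MLPClassifier (standing in for whatever framework the authors used) are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.9, size=(100, 2))       # [R_510nm, R_900nm] per sample (synthetic)
y = (X[:, 1] - X[:, 0] > 0.1).astype(int)      # 1 = sweet, 0 = unsweet (toy labeling rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)            # standardization, as described above
clf = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                    max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```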

Keywords: Spectroscopy, soluble solid content, pineapple, neural network.

534 A SWOT Analysis on Institutional Environments of University of the Punjab

Authors: Saghir Ahmad, Abid Hussain Ch., Atif Khalil, Misbah Malik

Abstract:

The major purpose of the study was to identify the strengths, weaknesses, opportunities, and threats of the institutional environments of the University of the Punjab, Lahore. The target population of the study was teachers of the University of the Punjab, Lahore. A sample of 235 teachers (155 males, 80 females) was selected through a multistage stratified sampling technique. A questionnaire on the institutional environments of the university, based on a SWOT analysis (strengths, weaknesses, opportunities, and threats), was used to collect the required data. The questionnaire consisted of two parts: the first part comprised demographic information (faculty, department, gender, teacher rank), while the second part included statements regarding the SWOT analysis. The reliability index (Cronbach's alpha) of the questionnaire was 0.87, which is statistically acceptable. Analysis of the data indicated significant differences in the opinions of respondents. Teachers of Islamic Studies and Law differed in their opinions regarding the institutional environment's strengths and opportunities, and this was supported by the findings of the study. There was a significant difference in the opinions of male and female teachers regarding the strengths and opportunities of the university, and no significant difference in their opinions regarding its weaknesses and threats.

Keywords: Institutional environments, SWOT analysis, teachers, University of the Punjab.

533 Renewable Energy Industry Trends and Its Contributions to the Development of Energy Resilience in an Era of Accelerating Climate Change

Authors: A. T. Asutosh, J. Woo, M. Kouhirostami, M. Sam, A. Khantawang, C. Cuales, W. Ryor, C. Kibert

Abstract:

Climate change and global warming have grown to alarming proportions; therefore, the need for a shift in the conceptualization of energy production is paramount. Current energy practices reflect this situation: fossil fuels continue their prominence at the expense of renewable sources. Despite this abundance, a large percentage of the world population still has no access to electricity. There have been encouraging signs of a global movement from nonrenewable to renewable energy, but means to reverse climate change have remained elusive. Worldwide, organizations have put tremendous effort into innovation, and conferences and exhibitions act as platforms that allow a broad exchange of information regarding trends in the renewable energy field. The Solar Power International (SPI) conference and exhibition is a gathering of concerned activists, and probably the largest convention of its kind. This study investigates current developments in the renewable energy field, analyzing the means by which industry is being applied to the issue. In reviewing the 2019 SPI conference, it was found that innovations in recycling and in assessing the environmental impacts of solar products need critical attention. There is a large movement in electrical storage, but a large gap remains in the development of security systems. This research focuses on solar energy, but its findings are relevant to the entire renewable energy market.

Keywords: Climate change, renewable energy, solar, trends, research, SPI.

532 Impact of Flexibility on Patient Satisfaction and Behavioral Intention: A Critical Reassessment and Model Development

Authors: Pradeep Kumar, Shibashish Chakraborty, Sasadhar Bera

Abstract:

Services cannot be inventoried in anticipation of demand fluctuations, which creates a difficult problem in the marketing of services. The inability to meet customers' (patients') requirements has more serious consequences in the healthcare context than in other service sectors. To meet patient requirements in the current uncertain environment, healthcare organizations are seeking ways to improve service delivery. Flexibility provides a mechanism for reducing variability in service encounters and improving performance; it is defined as the ability of the organization to cope with changing circumstances or instability caused by the environment. Patient satisfaction is an important performance outcome of healthcare organizations. However, a paucity of information exists in the healthcare delivery context on the impact of flexibility on patient satisfaction and behavioral intention. The present study attempts to develop a conceptual foundation for investigating the overall impact of flexibility on patient satisfaction and behavioral intention. Several dimensions of flexibility in the healthcare context are examined and proposed to have a significant impact on patient satisfaction and intention. Furthermore, the study involves a critical examination of the determinants of patient satisfaction and the development of a comprehensive view of the relationship between flexibility, patient satisfaction, and behavioral intention. Finally, theoretical contributions and implications for healthcare professionals are suggested from the flexibility perspective.

Keywords: Healthcare, flexibility, patient satisfaction, behavioral intention.

531 Numerical Simulation of Inviscid Transient Flows in Shock Tube and its Validations

Authors: Al-Falahi Amir, Yusoff M. Z, Yusaf T

Abstract:

The aim of this paper is to develop a new two-dimensional, time-accurate Euler solver for shock tube applications. The solver was developed to study the performance of a newly built short-duration hypersonic test facility at Universiti Tenaga Nasional (UNITEN) in Malaysia. The facility has been designed, built, and commissioned for different diaphragm pressure ratios in order to cover a wide range of Mach numbers. The solver uses second-order-accurate cell-vertex finite volume spatial discretization and fourth-order-accurate Runge-Kutta temporal integration, and it is designed to simulate the flow process for similar driver/driven gases (e.g., air-air as working fluids). The solver is validated against an analytical solution and experimental measurements in the high-speed flow test facility. Further investigations of the flow process inside the shock tube were made using the solver: the shock wave motion, reflection, and interaction were investigated, and their influence on the performance of the shock tube was determined. The results provide very good estimates of both the shock speed and the shock pressure obtained after diaphragm rupture, and detailed information on the gasdynamic processes over the full length of the facility is available. The agreement obtained has been reasonable.
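
For reference, a generic classical fourth-order Runge-Kutta time step of the kind mentioned above can be written as follows; `residual` is a placeholder for the cell-vertex finite volume spatial operator, which is not reproduced here.

```python
import numpy as np

def rk4_step(u, dt, residual):
    """Advance the state array `u` by one step of size `dt` using classical RK4."""
    k1 = residual(u)
    k2 = residual(u + 0.5 * dt * k1)
    k3 = residual(u + 0.5 * dt * k2)
    k4 = residual(u + dt * k3)
    return u + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Quick check on du/dt = -u (exact solution e^{-t}); in the actual solver the
# residual would be the finite volume flux balance over each control volume.
u, dt = np.array([1.0]), 0.1
for _ in range(10):
    u = rk4_step(u, dt, lambda v: -v)
print(u[0], np.exp(-1.0))   # both approximately 0.3679
```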

Keywords: Shock tunnel, shock tube, shock wave, CFD.

530 Effect of Twin Cavities on the Axially Loaded Pile in Clay

Authors: Ali A. Al-Jazaairry, Tahsin T. Sabbagh

Abstract:

The presence of cavities in soil predictably induces ground deformation and changes in soil stress, which might influence adjacent existing pile foundations; however, the effect of twin cavities on a nearby pile still needs to be understood. This research attempts to identify the behaviour of piles subjected to axial load and embedded in cavitied clayey soil. A series of finite element models was built to investigate the performance of piled foundations located in such soils, and the validity of the numerical simulation was evaluated by comparing it with an available field test and an alternative analytical model. The study examined the effect of parameters such as the size and depth of the twin cavities, the spacing between them, and their eccentricity from the pile axis on the performance of the axially loaded pile. Many cases were considered, and in each case a critical value was found at which the presence of the cavities has minimum impact on the behaviour of the pile. Load-displacement relationships for the parameters affecting pile behaviour are presented to provide helpful information for designing piled foundations situated near twin underground cavities. It was concluded that the presence of cavities within the soil mass reduces the ultimate capacity of the pile, and that this reduction differs according to the size and location of the cavities.

Keywords: Axial load, clay, finite element, pile, twin cavities, ultimate capacity.

529 Influence of Atmospheric Physical Effects on Static Behavior of Building Plate Components Made of Fiber-Cement-Based Materials

Authors: Jindrich J. Melcher, Marcela Karmazínová

Abstract:

The paper presents brief information on particular results of an experimental study focused on the behavior of structural plated components made of fiber-cement-based materials used in building construction and exposed to the atmospheric physical effects produced by weather changes in the summer period. Weather changes, represented mainly by temperature and rain, also cause changes in the temperature and moisture of the investigated structural components. This can affect their static behavior, that is, the stresses and deformations, which were monitored as the main outputs of the tests performed. The experimental verification is based on simulating the influence of temperature and rain using a defined procedure of warming and water sprinkling that corresponds to the weather conditions during the summer period in the South Moravian region of the Czech Republic, for which the application of these structural components is mainly planned. Two types of components have been tested: (i) glass-fiber-concrete panels used for building façades and (ii) fiber-cement slabs used mainly for cladding, but also as part of floor structures, lost shuttering, and so on.

Keywords: Atmospheric physical effect, building component, experiment, fiber-cement, glass-fiber-concrete, simulation, static behavior, test, warming, water sprinkling, weather.

528 Face Recognition Using Double Dimension Reduction

Authors: M. A. Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient while giving better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off at a certain resolution. In the proposed model, an image decimation algorithm is first applied to the face image for dimension reduction down to the resolution that provides the best recognition results. The Discrete Cosine Transform (DCT) is then applied to the face image because of its computational speed and feature extraction potential, and a subset of DCT coefficients from low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate is obtained with minimum computation. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL database, the Yale database, and a color database, and the proposed technique has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) retention of appropriate DCT coefficients to achieve the best recognition results under varying image pose, intensity, and illumination level.
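
A minimal sketch of the feature-extraction idea follows: decimate the face image, take its 2-D DCT, and keep a low-to-mid-frequency block of coefficients. The plain subsampling, the decimation factor, and the number of coefficients kept are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(face, decimation=2, block=8):
    """Feature vector from a grayscale face image via decimation + 2-D DCT."""
    face = np.asarray(face, dtype=float)
    small = face[::decimation, ::decimation]     # first dimension reduction (plain subsampling
                                                 # stands in for the paper's decimation algorithm)
    coeffs = dctn(small, norm="ortho")           # 2-D DCT of the decimated image
    return coeffs[:block, :block].ravel()        # second dimension reduction: low/mid frequencies

# Example: a 64-dimensional feature vector from a 112x92 face image.
features = dct_features(np.random.rand(112, 92))
print(features.shape)   # -> (64,)
```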

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.

527 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approach for estimating bridge deterioration uses Markov-chain models and regression analysis. Traditional Markov models have problems in estimating the required transition probabilities when the sample size is small, and because reliable bridge data have often not been collected over long periods, large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach provide similar estimates; however, the former provides results that are more conservative, that is, slightly lower-than-expected bridge condition ratings compared with the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration: condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained using Markov chains; however, it is desirable to use more data for better results.
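
For illustration, a Markov-chain deterioration model propagates the condition-state distribution with a transition probability matrix, as sketched below; the matrix values are illustrative, not estimates from this study.

```python
import numpy as np

# Rows/columns = condition ratings (best -> worst); each row sums to 1.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0])   # all bridges start in the best rating

for year in range(1, 11):
    state = state @ P                     # one-year deterioration step
    expected_rating = np.dot(state, np.arange(1, 5))
    print(f"year {year:2d}: expected condition rating = {expected_rating:.2f}")
```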

Keywords: Concrete bridges, deterioration, Markov chains, probability matrix.

526 The Design and Applied of Learning Management System via Social Media on Internet: Case Study of Operating System for Business Subject

Authors: Pimploi Tirastittam, Sawanath Treesathon, Amornrath Ongkawat

Abstract:

A Learning Management System (LMS) is a system used to manage learning by organizing content and learning activities between lecturer and learner, including online examination and evaluation. Nowadays, in the borderless learning era, learning activities can be accessed from anywhere in the world and at any time via information technology and media, so the learner can easily access knowledge, and differences in time and distance are no longer a constraint on learning. The learning pattern used in this research is the integration of in-class learning and online learning via the internet, with progress monitored by the Learning Management System, which creates a fast-response and accessible learning process through social media. To increase the capability and freedom of the learner, the system can show the current and past learning documents and video conferences, and it also has a chat room in which learner and lecturer can interact. The objectives of "The Design and Applied of Learning Management System via Social Media on Internet: Case Study of Operating System for Business Subject" are therefore to expand the opportunity for learning, to increase the efficiency of learning, and to increase the communication channels between lecturer and student. The data for this research were collected from 30 users of the system, all students enrolled in the subject, and the result of the research is rated "Very Good", which conforms to the hypothesis.

Keywords: Learning Management System, Social Media.

525 Technology Based Learning Environment and Student Achievement in English as a Foreign Language in Pakistan

Authors: M. Athar Hussain, M. Zafar Iqbal, M. Saeed Akhtar

Abstract:

The fast-growing accessibility and capability of emerging technologies have created enormous possibilities for designing, developing, and implementing innovative teaching methods in the classroom. The global technological scenario has paved the way for new pedagogies in the teaching-learning process that focus on technology-based learning environments and their impact on student achievement. The present experimental study was conducted to determine the effectiveness of a technology-based learning environment on student achievement in English as a foreign language. The sample of the study was 90 students of 10th grade at a public school located in Islamabad. A pretest-posttest equivalent-group design was used to compare the achievement of the two groups. A pretest and a posttest, each containing 50 items from the English textbook, were developed and administered, and the collected data were statistically analyzed. The results showed a significant difference between the mean scores of the experimental group and the control group. The experimental group performed better on the posttest, indicating that teaching through a technology-based learning environment enhanced the achievement level of the students. On the basis of the results, it was recommended that teaching and learning through information and communication technologies be adopted to enhance the language learning capability of students.

Keywords: English as a Foreign Language, Student Achievement, Technology Based Learning.

524 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Recently, e-commerce sites such as Rakuten and Alibaba have become some of the most popular online marketplaces in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. In addition, consumers have to register their name, age, gender, and other information in advance in order to access their accounts. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing and is widely used in document classification to extract semantic relationships between documents (here representing consumers) and words (here representing products). This concept can be applied to represent the relationship between users and items; however, one more factor, the shops, needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis of users and shops with that of users and items in the same feature space, which enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
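
One possible realization of this idea is sketched below (not necessarily the authors' exact model): each purchase record becomes a TaggedDocument whose words are product IDs and whose tags are the user ID and the shop ID, so users and shops are embedded in the same vector space, alongside the product word vectors.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy purchase records: (user, shop, purchased items).
purchases = [
    ("user1", "shopA", ["itemX", "itemY"]),
    ("user1", "shopB", ["itemZ"]),
    ("user2", "shopA", ["itemX"]),
]
docs = [TaggedDocument(words=items, tags=[user, shop])
        for user, shop, items in purchases]

# Users and shops end up as document vectors, products as word vectors.
model = Doc2Vec(docs, vector_size=32, window=2, min_count=1, epochs=50, dm=1)

# Similar shops/users and items for a given user, via cosine similarity.
print(model.dv.most_similar("user1", topn=2))              # nearest users/shops
print(model.wv.most_similar([model.dv["user1"]], topn=2))  # nearest products
```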

Keywords: Doc2Vec, marketing, online marketplace, recommendation system.

523 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: J. Madureira, R. Lagido, I. Sousa

Abstract:

Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for detecting wave rides and computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the smartphone's Inertial Measurement Unit (IMU) to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video recorded from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times, and their durations. This paper shows that it is feasible to use smartphones to quantify performance metrics during surfing; in particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
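
A minimal sketch of the GPS-only approach is given below: speed computed from consecutive GPS fixes is thresholded, and intervals above the threshold are counted as wave rides. The threshold value and minimum duration are illustrative assumptions.

```python
import numpy as np

def detect_waves(t, speed, v_on=2.5, min_duration=3.0):
    """t: timestamps (s); speed: speed over ground (m/s). Returns (start, end) pairs."""
    riding = speed > v_on
    waves, start = [], None
    for i, flag in enumerate(riding):
        if flag and start is None:
            start = t[i]                      # ride begins when speed exceeds threshold
        elif not flag and start is not None:
            if t[i] - start >= min_duration:  # discard very short bursts
                waves.append((start, t[i]))
            start = None
    return waves

t = np.arange(0, 60, 1.0)                        # 1 Hz GPS fixes
speed = np.where((t > 10) & (t < 18), 4.0, 1.0)  # one synthetic wave ride
print(detect_waves(t, speed))                    # -> [(11.0, 18.0)]
```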

Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.

522 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacements, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with the greedy snake algorithm. Both region and contour information are used to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network to separate the target from the background; its output is then used for the exact calculation of the size and center of the target, and also serves as the initial contour for the greedy snake algorithm to find the target's exact edge. The proposed algorithm has been tested on a database that contains many challenges, such as the high speed and agility of the aircraft, background clutter, occlusions, camera movement, and so on. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
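
The first step can be illustrated with a minimal bootstrap particle filter for a 2-D target position, sketched below; the random-walk motion model, the Gaussian likelihood, and the synthetic measurement stand in for the kernel/contour-based observation model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
particles = rng.uniform(0, 100, size=(n, 2))   # initial (x, y) position hypotheses
weights = np.full(n, 1.0 / n)

def pf_step(particles, weights, measurement, motion_std=3.0, meas_std=5.0):
    # Predict: random-walk motion model (stand-in for the real target dynamics).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood around the image-based measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles, weights = pf_step(particles, weights, measurement=np.array([40.0, 60.0]))
print(particles.mean(axis=0))   # estimated target location, near (40, 60)
```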

Keywords: Video tracking, particle filter, greedy snake, neural network.

521 Sewer Culvert Installation Method to Accommodate Underground Construction in an Urban Area with Narrow Streets (The Development of Shield Switching Type Micro-Tunneling Method and the Introduction of Construction Examples)

Authors: Osamu Igawa, Hiroshi Kouchiwa, Yuji Ito

Abstract:

In recent years, a reconstruction project for sewer pipelines has been progressing in Japan with the aim of renewing old sewer culverts. However, it is difficult to secure a sufficient base area for shafts in urban areas because many streets are narrow with a complex layout; as a result, construction in such urban areas is generally very demanding. In urban areas, there is a strong requirement for a safe, reliable, and economical construction method that does not disturb the public's daily life and urban activities. With this in mind, we developed a new construction method called the "shield switching type micro-tunneling method," which integrates the micro-tunneling method and the shield method. In this method, the pipeline is constructed first for sections that are gently curved or straight using the economical micro-tunneling method, and then the method is switched to the shield method for sections with a sharp curve or a series of curves, without establishing an intermediate shaft. This paper provides information on the features and construction examples of this newly developed method.

Keywords: Micro-tunneling method, Secondary lining applied RC segment, Sharp curve, Shield method, Switching type.

520 Combating Money Laundering in the Banking Industry: Malaysian Experience

Authors: Aspalella A. Rahman

Abstract:

Money laundering has been described by many as the lifeblood of crime and is a major threat to the economic and social well-being of societies. It has been recognized that the banking system has long been a central element of money laundering, in part due to the complexity and confidentiality of the banking system itself. It is generally accepted that effective anti-money laundering (AML) measures adopted by banks will make it tougher for criminals to get their "dirty money" into the financial system; in fact, for law enforcement agencies, banks are considered an important source of valuable information for the detection of money laundering. However, from the banks' perspective, the main reason for their existence is to make as much profit as possible, and hence their cultural and commercial interests are totally distinct from those of the law enforcement authorities. Undoubtedly, AML laws create a major dilemma for banks, as they produce a significant shift in the way banks interact with their customers. Furthermore, the implementation of these laws not only creates significant compliance problems for banks but also has the potential to adversely affect their operations. As such, it is legitimate to ask whether these laws are effective in preventing money launderers from using banks, or whether they simply put an unreasonable burden on banks and their customers. This paper attempts to address these issues and analyze them against the background of the Malaysian AML laws. Effective coordination between the AML regulator and the banking industry is vital to minimize the problems faced by banks and thereby to ensure effective implementation of the laws in combating money laundering.

Keywords: Banking industry, Bank Negara, money laundering, Malaysia.

519 The Risk Assessment of Nano-particles and Investigation of Their Environmental Impact

Authors: Nader Nabhani, Amir Tofighi

Abstract:

Nanotechnology is the science of creating, using, and manipulating objects that have at least one dimension in the range of 0.1 to 100 nanometers; in other words, it is reconstructing a substance from its individual atoms and arranging them in a way that is desirable for our purpose. The main reason nanotechnology has been attracting attention is the unique properties that objects show when they are formed at the nano-scale. The characteristics that nano-scale materials show, compared with their naturally occurring forms, are both useful for creating high-quality products and dangerous when in contact with the body or spread into the environment. To control and lower the risk posed by such nano-scale particles, the following three points should be considered: 1) These materials can cause long-term diseases that may show their effects on the body years after penetrating human organs, and since this science has only recently been developed on an industrial scale, not enough information is available about their hazards to the body. 2) These particles can easily spread into the environment and remain in air, soil, or water for a very long time, in addition to their high ability to penetrate the skin and cause new kinds of diseases. 3) To protect the body and the environment against the danger of these particles, protective barriers must be finer than these small objects, and such defenses are hard to accomplish. This paper reviews, discusses, and assesses the risks that humans and the environment face as this new science develops at a high rate.

Keywords: Nanotechnology, risk assessment, environment.

518 Image Classification and Accuracy Assessment Using the Confusion Matrix, Contingency Matrix, and Kappa Coefficient

Authors: F. F. Howard, C. B. Boye, I. Yakubu, J. S. Y. Kuma

Abstract:

Remote sensing can be used to produce land use and land cover maps through a procedure known as image classification, for which numerous elements ought to be taken into consideration, including the availability of highly satisfactory Landsat imagery, secondary data, and a precise classification process. The goal of this study was to classify and map the land use and land cover of the study area using remote sensing and Geospatial Information System (GIS) analysis. The classification was done using Landsat 8 satellite images acquired in December 2020 covering the study area; the images were downloaded from the USGS. The Landsat image, with 30 m resolution, was geo-referenced to the WGS_84 datum and the Universal Transverse Mercator (UTM) Zone 30N coordinate projection system, and a radiometric correction was applied to reduce noise in the image. This study consists of two sections: the Land Use/Land Cover (LULC) classification and the accuracy assessment using the confusion matrix, the contingency matrix, and the kappa coefficient. The LULC classes were vegetation (agriculture) (67.87%), water bodies (0.01%), mining areas (5.24%), forest (26.02%), and settlement (0.88%). An overall accuracy of 97.87% and a kappa coefficient (K) of 97.3% were obtained for the confusion matrix, while an overall accuracy of 95.7% and a kappa coefficient of 0.947 were obtained for the contingency matrix. The kappa coefficients were rated as substantial; hence, the classified image is fit for further research.
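
For reference, the overall accuracy and Cohen's kappa can be computed directly from a confusion matrix as sketched below; the matrix values are illustrative, not the study's data.

```python
import numpy as np

cm = np.array([[50,  2,  1],      # rows: reference classes
               [ 3, 40,  2],      # columns: classified classes
               [ 1,  1, 30]])

total = cm.sum()
overall_accuracy = np.trace(cm) / total
# Expected chance agreement from the row and column marginals.
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (overall_accuracy - expected) / (1.0 - expected)
print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")
```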

Keywords: Confusion matrix, contingency matrix, kappa coefficient, land use/land cover, accuracy assessment.

517 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method

Authors: M. T. Tsepav, Y. Adamu, M. A. Umar

Abstract:

A geoelectric survey was carried out in parts of Angwan Gwari, an outskirt of Lapai Local Government Area of Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity, and protective capacity of the area, from which an aquifer characterisation was made. The G41 Resistivity Meter was employed to obtain fifteen Schlumberger Vertical Electrical Sounding (VES) data sets along profiles in a square grid network. The data were processed using the Interpex 1-D sounding inversion software, which gives vertical electrical sounding curves with a layered model comprising the apparent resistivities, overburden thicknesses, and depths. This information was used to evaluate the longitudinal conductance and transmissivities of the layers. The results show generally low resistivities across the survey area, with the average longitudinal conductance varying from 0.0237 Siemens in VES 6 to 0.1261 Siemens in VES 15 and almost the entire area giving values less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω.m2 in VES 4 to 299070 Ω.m2 in VES 1, and all but VES 4 and VES 14 had average values greater than 400 Ω.m2. These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.
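
The layer parameters referred to above are commonly computed as Dar Zarrouk parameters; a minimal sketch follows, assuming that the transmissivity quoted in Ω.m2 corresponds to the transverse resistance. The layer values are illustrative, not the survey's results.

```python
# Dar Zarrouk parameters for one VES station: longitudinal conductance
# S = sum(h_i / rho_i) and transverse resistance T = sum(h_i * rho_i).
layers = [           # (resistivity in ohm.m, thickness in m), illustrative values
    (45.0, 1.2),
    (120.0, 6.5),
    (300.0, 15.0),
]

S = sum(h / rho for rho, h in layers)   # Siemens
T = sum(h * rho for rho, h in layers)   # ohm.m^2
print(f"longitudinal conductance S = {S:.4f} S, transverse resistance T = {T:.1f} ohm.m^2")
```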

Keywords: Geoelectric survey, corrosivity, protective capacity, transmissivity.

516 Alumina Supported Cu-Mn-La Catalysts for CO and VOCs Oxidation

Authors: Elitsa N. Kolentsova, Dimitar Y. Dimitrov, Petya Cv. Petrova, Georgi V. Avdeev, Diana D. Nihtianova, Krasimir I. Ivanov, Tatyana T. Tabakova

Abstract:

Recently, copper- and manganese-containing systems have been recognized as active and selective catalysts in many oxidation reactions. The main idea of this study is to obtain more information about γ-Al2O3-supported Cu-Mn-La catalysts and to evaluate their activity in the simultaneous oxidation of CO, CH3OH, and dimethyl ether (DME). The catalysts were synthesized by impregnating the support with a mixed aqueous solution of copper, manganese, and lanthanum nitrates under different conditions. XRD, HRTEM/EDS, TPR, and thermal analysis were performed to investigate the bulk and surface properties of the catalysts, and the texture characteristics were determined with a Quantachrome Instruments NOVA 1200e specific surface area and pore analyzer. The catalytic measurements of single-compound oxidation were carried out on continuous-flow equipment with a four-channel isothermal stainless steel reactor over a wide temperature range. On the basis of the XRD and HRTEM/EDS analyses, it was concluded that the active component of the mixed Cu-Mn-La/γ-alumina catalysts strongly depends on the Cu/Mn molar ratio and consists of at least four compounds: CuO, La2O3, MnO2, and Cu1.5Mn1.5O4. A homogeneous distribution of the active component on the carrier surface was found. The chemical composition strongly influenced the catalytic properties, and this influence varied considerably between the different processes.

Keywords: Supported copper-manganese-lanthanum catalysts.

515 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but the existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to a joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction with this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.
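
A generic compressed-sensing round trip is sketched below: a sparse signal is measured with a Bernoulli matrix and recovered with orthogonal matching pursuit, which stands in for the paper's JBMOLS algorithm and block-sparse model.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)     # k-sparse test signal

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)     # Bernoulli measurement matrix
y = Phi @ x                                                 # compressed measurements

x_hat = orthogonal_mp(Phi, y, n_nonzero_coefs=k)            # greedy sparse recovery
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```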

Keywords: Telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal.

514 Diagnosing Dangerous Arrhythmia of Patients by Automatic Detecting of QRS Complexes in ECG

Authors: Jia-Rong Yeh, Ai-Hsien Li, Jiann-Shing Shieh, Yen-An Su, Chi-Yu Yang

Abstract:

In this paper, an automatic QRS complex detection algorithm was applied to analyze ECG recordings, and five criteria for diagnosing dangerous arrhythmia were applied in a protocol-type automatic arrhythmia diagnosis system. The detection algorithm finds the distribution of QRS complexes in ECG recordings and related information, such as heart rate and RR interval. In this investigation, twenty sampled ECG recordings of patients with different pathological conditions were collected for off-line analysis. A combination of four digital filters for improving the ECG signal and raising the QRS detection rate was proposed as pre-processing; both hardware filters and digital filters were applied to eliminate the different types of noise mixed with the ECG recordings. The automatic detection algorithm was then applied to verify the distribution of QRS complexes. Finally, quantitative clinical criteria for diagnosing arrhythmia were programmed into a practical application for automatic arrhythmia diagnosis as a post-processor. The automatic diagnoses of dangerous arrhythmia were compared with off-line diagnoses by experienced clinical physicians, and the comparison showed that the automatic diagnosis achieved a matching rate of 95% with respect to an experienced physician's diagnoses.
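
A minimal sketch of the detection stage is shown below: band-pass filtering followed by peak picking, from which RR intervals and heart rate follow. The filter band, thresholds, and synthetic signal are common illustrative choices, not the paper's exact pre-processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs):
    b, a = butter(3, [5, 15], btype="band", fs=fs)    # typical QRS energy band (~5-15 Hz)
    envelope = np.abs(filtfilt(b, a, ecg))
    peaks, _ = find_peaks(envelope,
                          height=0.5 * envelope.max(),
                          distance=int(0.25 * fs))    # refractory period of ~250 ms
    rr = np.diff(peaks) / fs                          # RR intervals in seconds
    heart_rate = 60.0 / rr.mean() if rr.size else float("nan")
    return peaks, rr, heart_rate

# Synthetic example: 10 s of a 1 Hz spike train plus noise at fs = 250 Hz.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = np.random.default_rng(0).normal(0, 0.05, t.size)
ecg[::fs] += 1.0                                      # one "beat" per second
peaks, rr, hr = detect_qrs(ecg, fs)
print(len(peaks), round(hr, 1))                       # roughly 10 beats at ~60 bpm
```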

Keywords: Signal processing, electrocardiography (ECG), QRS complex, arrhythmia.

513 Night-Time Traffic Light Detection Based On SVM with Geometric Moment Features

Authors: Hyun-Koo Kim, Young-Nam Shin, Sa-gong Kuk, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic light detection method for night-time driving. First, candidate blobs of traffic lights are extracted from the RGB color image: the input image is represented in the dominant color domain using the color transform proposed by Ruta, and red- and green-dominant regions are selected as candidates. After candidate blob selection, a shape filter is applied for noise reduction using blob information such as length, area, bounding-box area, etc. A multi-class classifier based on the Support Vector Machine (SVM) is then applied to the candidates. Three kinds of features are used: basic features such as blob width, height, center coordinates, and area; brightness-based stochastic features; and, in particular, geometric moment values computed between the candidate region and its adjacent region, which are proposed and used to improve the detection performance. The proposed system is implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested with urban and rural road videos. Through the tests, we show that the proposed method using PF, BMF, and GMF reaches a detection rate of up to 93% with an average computation time of 15 ms/frame.
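
The classification stage can be sketched as follows, with Hu moments standing in for the geometric moment features computed between the candidate and adjacent regions; the toy blob shapes and SVM settings are illustrative assumptions.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def blob_features(mask):
    """Seven Hu moment invariants of a binary blob mask (uint8, 0/255)."""
    return cv2.HuMoments(cv2.moments(mask)).ravel()

def make_blob(kind, rng):
    """Toy blob mask: 0 = round lamp, 1 = short bar, 2 = long bar."""
    mask = np.zeros((48, 48), np.uint8)
    cx, cy = (int(v) for v in rng.integers(20, 28, size=2))
    s = int(rng.integers(6, 10))
    if kind == 0:
        cv2.circle(mask, (cx, cy), s, 255, -1)
    elif kind == 1:
        cv2.rectangle(mask, (cx - s, cy - s // 2), (cx + s, cy + s // 2), 255, -1)
    else:
        cv2.rectangle(mask, (cx - 2 * s, cy - s // 2), (cx + 2 * s, cy + s // 2), 255, -1)
    return mask

rng = np.random.default_rng(0)
X = np.array([blob_features(make_blob(k, rng)) for k in (0, 1, 2) for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

# Large C because the raw Hu moment values are small in magnitude.
clf = SVC(kernel="linear", C=1e4).fit(X, y)
print("training accuracy:", clf.score(X, y))
```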

Keywords: Night-time traffic light detection, multi-class classification, driving assistance system.

512 Numerical Study on CO2 Pollution in an Ignition Chamber by Oxygen Enrichment

Authors: Zohreh Orshesh

Abstract:

In this study, a 3D combustion chamber was simulated using FLUENT 6.32, with the aims of obtaining accurate information about the combustion profile in the furnace and checking the effect of oxygen enrichment on the combustion process. Oxygen enrichment is an effective way to reduce combustion pollutants. The air-to-fuel flow rate ratio is varied as 1.3, 3.2, and 5.1, and the oxygen-enriched flow rates are 28, 54, and 68 lit/min. Combustion simulations typically involve the solution of turbulent flows with heat transfer, species transport, and chemical reactions, and it is common to use the Reynolds-averaged form of the governing equations in conjunction with a suitable turbulence model. The 3D Reynolds-Averaged Navier-Stokes (RANS) equations with the standard k-ε turbulence model are solved with the FLUENT 6.3 software. A first-order upwind scheme is used to discretize the governing equations, and the SIMPLE algorithm is used for pressure-velocity coupling; species mass fractions at the wall are assumed to have zero normal gradients. The results show that the minimum CO2 mole fraction occurs when the air-to-fuel flow rate ratio is 5.1. Additionally, for a fixed oxygen enrichment condition, increasing the air-to-fuel ratio increases the temperature peak. As a result, oxygen enrichment can reduce the CO2 emission of this kind of furnace at high air-to-fuel rates.

Keywords: Combustion chamber, oxygen enrichment, Reynolds-Averaged Navier-Stokes, CO2 emission.
