Search results for: Data Aggregation
5122 Internal Force State Recognition of Jiujiang Bridge Based on Cable Force-displacement Relationship
Authors: Weifeng Wang, Guoqing Huang, Xianwei Zeng
Abstract:
The nearly 21-year-old Jiujiang Bridge, which suffers from an uneven line shape, persistent large downwarping of the main beam and cracking of the box girder, needs reinforcement and cable adjustment. It has undergone cable adjustment twice, with incomplete data. Identifying the initial internal force state of the Jiujiang Bridge is therefore the key to the cable adjustment project. Building on parameter identification from static load test data, this paper proposes determining the initial internal force state of the cable-stayed bridge with a parameter identification method based on the cable force-displacement relationship. That is, by measuring the displacement and the change in cable forces twice, one can identify the parameters concerned by means of optimization. The method is applied to the cable adjustment, replacement and reinforcement project for the Jiujiang Bridge, where it serves as guidance for the cable adjustment and reinforcement work.
Keywords: Cable-stayed bridge, cable force-displacement, parameter identification, internal force state
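A minimal sketch of the optimization step described above, assuming a hypothetical linear influence-matrix model that maps the unknown initial cable forces to the measured displacements and cable-force changes (the matrices, noise levels and dimensions are illustrative, not the bridge's actual data):

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    n_cables, n_points = 6, 10
    # Hypothetical influence matrices (in practice these come from a structural model of the bridge).
    S_disp = rng.normal(size=(n_points, n_cables))    # displacement per unit initial cable force
    S_force = rng.normal(size=(n_cables, n_cables))   # cable-force change per unit initial force

    p_true = rng.uniform(500, 1500, n_cables)                   # synthetic "true" initial forces (kN)
    d_meas = S_disp @ p_true + rng.normal(0, 0.5, n_points)     # first measurement: displacements
    f_meas = S_force @ p_true + rng.normal(0, 5.0, n_cables)    # second measurement: force changes

    def residuals(p):
        # Stack the residuals of both measurement sets so the optimizer minimizes them jointly.
        return np.concatenate([S_disp @ p - d_meas, S_force @ p - f_meas])

    sol = least_squares(residuals, x0=np.full(n_cables, 1000.0))
    print("identified initial cable forces (kN):", np.round(sol.x, 1))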
5121 The Mechanistic Deconvolutive Image Sensor Model for an Arbitrary Pan–Tilt Plane of View
Authors: S. H. Lim, T. Furukawa
Abstract:
This paper presents a generalized form of the mechanistic deconvolution technique (GMD) for modeling image sensors that is applicable to various pan–tilt planes of view. The mechanistic deconvolution technique (UMD) is modified with the given angles of a pan–tilt plane of view to formulate constraint parameters and characterize distortion effects, and thereby determine the corrected image data. As a result, no experimental setup or calibration is required. Due to the mechanistic nature of the sensor model, the sensor image plane no longer needs to be orthogonal to its z-axis, and the dependency on image data is reduced. An experiment was constructed to evaluate the accuracy of a model created by GMD and its insensitivity to changes in sensor properties and in pan and tilt angles, in comparison with a pre-calibrated model and a model created by UMD, using two sensors with different specifications. GMD achieved similar accuracy with one-seventh the number of iterations and attained a mean error lower by a factor of 2.4 when compared with the pre-calibrated and UMD models, respectively. The model has also shown itself to be robust and, in comparison to the pre-calibrated and UMD models, improved the accuracy significantly.
Keywords: Image sensor modeling, mechanistic deconvolution, calibration, lens distortion.
5120 The Role of Gender and Age on Students' Perceptions towards Online Education Case Study: Sakarya University, Vocational High School
Authors: Fahme Dabaj, Havva Başak
Abstract:
The aim of this study is to find out and analyze the role of gender and age in students' perceptions of the distance online program offered by the Vocational High School at Sakarya University. The research is based on a questionnaire as the means of data collection to determine the role of age and gender in students' perceptions of online education, and the study progressed by finding relationships between the variables used in the data collection instrument. The findings of the analysis revealed that although the students registered for the online program by choice, they preferred traditional face-to-face education due to the difficulty of nonverbal communication, their lack of competence in using the required technology, and their stronger belief in traditional face-to-face learning than in online education. Regarding gender, the results showed that female students have a better perception of online education than male students. Regarding age, the results showed that the older the students, the stronger their preference for attending face-to-face classes.
Keywords: Distance education, online education, internet education, student perceptions.
5119 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients Cohorts: A Case Study in Scotland
Authors: Sotirios Raptis
Abstract:
Health and social care (HSc) services planning and scheduling face unprecedented challenges due to pandemic pressure, and also suffer from unplanned spending that has been negatively impacted by the global financial crisis. Data-driven approaches can help to improve policies and to plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses services packing using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as Classification and Regression Trees (CART), Random Forests (RF), and Logistic Regression (LGR). The Chi-squared and Student's t significance tests are used on data spanning 39 years for which records of services delivered in Scotland exist. The demands are associated using probabilities and form parts of statistical hypotheses. These hypotheses assume, as their null part, that the target demand is statistically dependent on other services' demands, and this linking is checked against the data. In addition, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target demand to independent demands, thus forming groups of services. The statistical tests confirmed the ML coupling, made the predictions statistically meaningful, and showed that a target service can be matched reliably to other services, while ML showed that such marked relationships can also be linear. Zero padding was used for missing-year records and better illustrated such relationships both for limited years and for the entire span, offering long-term data visualizations, while limited-year periods explained how well patient numbers can be related over short periods of time or how they change over time as opposed to behaviours across more years. The prediction performance of the associations was measured using metrics such as the Receiver Operating Characteristic (ROC), Area Under the Curve (AUC) and Accuracy (ACC), as well as the Chi-squared and Student's t statistical tests. Co-plots and comparison tables for the RF, CART, and LGR methods, the p-values from the tests, and Information Exchange (IE/MIE) measures are provided, showing the relative performance of the ML methods and of the statistical tests, as well as the behaviour under different learning ratios. The impact of k-nearest neighbours classification (k-NN), Cross-Correlation (CC) and C-Means (CM) first groupings was also studied over limited years and for the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC = 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by zero padding, by irregularities in the data, or by outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and treatment of old people, low birth weights, alcoholism, drug abuse, and emergency admissions.
The work found that different HSc services can be packed well into plans of limited duration, across various service sectors and learning configurations, as confirmed by using statistical hypotheses.
Keywords: Class, cohorts, data frames, grouping, prediction, probabilities, services.
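A minimal sketch of the coupling-and-prediction idea above, using two synthetic yearly demand series; the service names, class threshold and 39-year length are illustrative stand-ins for the Scottish data:

    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score, accuracy_score

    rng = np.random.default_rng(1)
    years = 39
    other_demand = rng.poisson(200, years)                          # e.g. home care demand per year
    target_demand = 0.5 * other_demand + rng.normal(0, 10, years)   # coupled target service demand

    # Statistical association: chi-squared test on binned (low/high) demands.
    table = np.histogram2d(other_demand, target_demand, bins=2)[0]
    chi2, p_value, _, _ = chi2_contingency(table)
    print(f"chi-squared p-value: {p_value:.3f}")

    # ML prediction of the target demand class (above/below median) from the other service's demand.
    X = other_demand.reshape(-1, 1)
    y = (target_demand > np.median(target_demand)).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for name, clf in [("CART", DecisionTreeClassifier(random_state=0)),
                      ("RF", RandomForestClassifier(random_state=0)),
                      ("LGR", LogisticRegression())]:
        clf.fit(X_tr, y_tr)
        proba = clf.predict_proba(X_te)[:, 1]
        print(name, "AUC:", round(roc_auc_score(y_te, proba), 3),
              "ACC:", round(accuracy_score(y_te, clf.predict(X_te)), 3))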
5118 The Effect of Ownership Structure on Stock Prices after Crisis: A Study on ISE 100 Index
Authors: U. Şendurur, B. Nazlıoğlu
Abstract:
Using Turkish data, this study investigates whether a firm's ownership structure has an impact on its stock price after the crisis. A linear regression model is estimated on data for non-financial firms trading in the Istanbul Stock Exchange 100 Index (ISE 100). The findings show that the explanatory variables, namely inside ownership, largest ownership, concentrated ownership, foreign shareholders, family control and dispersed ownership, are not very important in explaining stock prices after the crisis. Family-controlled firms and concentrated ownership are positively related to the stock price, while dispersed ownership, largest ownership, foreign shareholders and inside ownership are negatively related to stock prices; however, because the p-values are not below 0.05, these relationships are not significant. In addition, the analysis shows that the shares of firms with inside, largest and dispersed ownership structures outperform those of the other firms. Furthermore, ownership-concentrated firms outperform family-controlled firms.
Keywords: Financial crisis, ISE 100 Index, Ownership structure, Stock price.
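A small sketch of the kind of linear regression used above, on a synthetic data frame whose columns merely stand in for the ownership variables (this is not the study's dataset):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 90  # hypothetical number of non-financial ISE 100 firms
    df = pd.DataFrame({
        "inside_own": rng.uniform(0, 1, n),
        "largest_own": rng.uniform(0, 1, n),
        "foreign_share": rng.uniform(0, 1, n),
        "family_controlled": rng.integers(0, 2, n),
    })
    df["stock_price"] = 10 + 2 * df["family_controlled"] + rng.normal(0, 5, n)

    X = sm.add_constant(df[["inside_own", "largest_own", "foreign_share", "family_controlled"]])
    result = sm.OLS(df["stock_price"], X).fit()
    print(result.summary())  # coefficient signs and p-values, judged against the 0.05 level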
5117 Thermal Comfort and Energy Saving Evaluation of a Combined System in an Office Room Using Displacement Ventilation
Authors: A. Q. Ahmed, S. Gao
Abstract:
In this paper, the energy saving and human thermal comfort in a typical office room are investigated. The impact of a combined system of exhaust inlet air with light slots located at ceiling level, in a room served by a displacement ventilation system, is numerically modelled. Previous experimental data are used to validate the Computational Fluid Dynamics (CFD) model. The simulated office room includes two seated occupants, two computers, two data loggers and four lamps. The combined system is located at ceiling level above the heat sources. A new method of calculating the cooling coil load in a Stratified Air Distribution (STRAD) system is used in this study. The results show that 47.4% of the space cooling load can be saved by combining the exhaust inlet air with light slots at ceiling level above the heat sources.
Keywords: Air conditioning, Displacement ventilation, Energy saving, Thermal comfort.
5116 Online Forums Hotspot Detection and Analysis Using Aging Theory
Authors: K. Nirmala Devi, V. Murali Bhaskaran
Abstract:
The exponential growth of social media has drawn much attention to public opinion information. Online forums, blogs and microblogs are proving to be extremely valuable resources and hold a bulk volume of information. However, most social media data are in unstructured or semi-structured form, which makes them difficult to decipher automatically. It is therefore essential to understand and analyze these data in order to make the right decisions. Online forum hotspot detection is a promising research field in web mining, and it helps users make the right decision at the right time. The proposed system consists of a novel approach to detect a hotspot forum for any given time period. It uses aging theory to find the hot terms and E-K-means to detect the hotspot forum. Experimental results demonstrate that the proposed approach outperforms k-means in detecting hotspot forums, with improved accuracy.
Keywords: Hotspot forums, Micro blog, Blog, Sentiment Analysis, Opinion Mining, Social media, Twitter, Web mining.
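A rough sketch of the aging idea (a term or forum "energy" that grows with new mentions and decays over time) followed by clustering of the forums' energy profiles; the decay factor, the counts and plain k-means are illustrative stand-ins for the paper's hot-term selection and E-K-means variant:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    n_forums, n_days = 20, 30
    mentions = rng.poisson(2, size=(n_forums, n_days)).astype(float)
    mentions[:4, 20:] += rng.poisson(15, size=(4, 10))   # a few forums heat up late in the period

    decay = 0.8   # hypothetical aging/decay factor
    energy = np.zeros_like(mentions)
    for t in range(n_days):
        prev = energy[:, t - 1] if t > 0 else 0.0
        energy[:, t] = decay * prev + mentions[:, t]     # energy rises with mentions, decays with age

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(energy)
    hot_cluster = labels[np.argmax(energy[:, -1])]       # cluster containing the most energetic forum
    print("hotspot forums:", np.where(labels == hot_cluster)[0])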
5115 A Study on Fatigue Performance of Asphalt Using AMPT
Authors: Yuan Jie Kelvin Lu, Amin Chegenizadeh
Abstract:
Asphalt pavement is a mixture made up mainly of aggregates, binders and fillers, used for pavement construction. An experimental program was set up to determine the fatigue performance of asphalt with three different grades of conventional binder. The asphalt specimens achieved the optimum bulk density and air voids, with a consistent bulk density of 2.3 t/m3 and an air void content of 5% ± 0.5%, before being loaded into the Asphalt Mixture Performance Tester (AMPT) for the fatigue test. The number of cycles to failure is defined as the point where the phase angle drops, which is caused by the formation of cracks as micro-cracks accumulate while the asphalt undergoes repeated loading cycles. The collected data are therefore analyzed using the drop in phase angle as the failure criterion. Based on the data analyzed, it is evident that the fatigue life of asphalt depends on the grade of the binder. The results obtained show that all specimens experience a drop in phase angle due to macro-cracks in the asphalt specimen.
Keywords: Asphalt binder, AMPT, CX test, simplified viscoelastic continuum damage (S-VECD).
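A small sketch of the failure criterion described above: given a phase-angle trace over loading cycles, the number of cycles to failure is taken at the peak before the drop (the trace below is simulated, not AMPT output):

    import numpy as np

    # Synthetic phase-angle evolution: rises as micro-cracks accumulate, then drops at macro-cracking.
    cycles = np.arange(1, 20001)
    phase_angle = 30 + 10 * (1 - np.exp(-cycles / 6000.0))
    phase_angle[12000:] -= 0.002 * (cycles[12000:] - 12000)

    failure_cycle = cycles[np.argmax(phase_angle)]   # cycle count at the peak, before the drop
    print("number of cycles to failure:", failure_cycle)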
5114 Data Embedding Based on Better Use of Bits in Image Pixels
Authors: Rehab H. Alwan, Fadhil J. Kadhim, Ahmad T. Al-Taani
Abstract:
In this study, a novel approach to image embedding is introduced. The proposed method consists of three main steps. First, the edges of the image are detected using Sobel mask filters. Second, the least significant bit (LSB) of each pixel is used. Finally, gray-level connectivity is applied using a fuzzy approach, and the ASCII code is used for information hiding. The bit preceding the LSB represents the edged image after gray-level connectivity, and the remaining six bits represent the original image with very little difference in contrast. The proposed method embeds three images in one image and includes, as a special case of data embedding, the hiding, identification and authentication of text embedded within the digital images. The image embedding method can also be considered a good compression method in terms of conserving memory space. Moreover, information hiding within digital images can be used for secure information transfer. The creation and extraction of the three embedded images, and the hiding of text information, are discussed and illustrated in the following sections.
Keywords: Image embedding, Edge detection, gray level connectivity, information hiding, digital image compression.
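A minimal sketch of the first two steps above (Sobel edge detection and LSB embedding) on a synthetic grayscale image; the edge threshold and the choice of hiding the edge map in the LSB plane are simplifications of the full method:

    import numpy as np
    from scipy.ndimage import sobel

    rng = np.random.default_rng(4)
    cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # synthetic grayscale cover image

    # Step 1: Sobel edge detection.
    gx = sobel(cover.astype(float), axis=0)
    gy = sobel(cover.astype(float), axis=1)
    edges = (np.hypot(gx, gy) > 128).astype(np.uint8)              # binary edge map (threshold assumed)

    # Step 2: embed the edge map into the least significant bit of each pixel.
    stego = (cover & 0xFE) | edges                                 # clear the LSB, then write the payload bit

    # Extraction: read the LSB plane back.
    recovered = stego & 0x01
    print("edge map recovered exactly:", bool(np.array_equal(recovered, edges)))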
5113 Computational Fluid Dynamics Simulation and Comparison of Flow through Mechanical Heart Valve Using Newtonian and Non-Newtonian Fluid
Authors: D. Šedivý, S. Fialová
Abstract:
The main purpose of this study is to show the differences between numerical solutions of flow through an artificial heart valve using a Newtonian and a non-Newtonian fluid. The simulation was carried out with a commercial computational fluid dynamics (CFD) package based on the finite-volume method. An aortic bileaflet heart valve (Sorin Bicarbon) was used as the pattern for the model of a real heart valve replacement. Computed tomography (CT) was used to obtain accurate parameters of the valve. Data from the CT scan were transferred into a commercial 3D design package, where the model for CFD was built. The Carreau rheology model was applied for the non-Newtonian fluid. Physiological data for the cardiac cycle were used as boundary conditions. The outputs examined were the leaflet excursion from opening to closure and the fluid dynamics through the valve. This study also includes experimental measurement of the pressure fields in the vicinity of the valve to verify the numerical outputs. The results show a favorable comparison between the computational solutions of flow through the mechanical heart valve using the Newtonian and non-Newtonian fluids.
Keywords: Computational modeling, dynamic mesh, mechanical heart valve, non-Newtonian fluid, SDOF.
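For reference, the Carreau model mentioned above gives an effective viscosity that varies with shear rate; a small sketch with commonly cited blood-like parameter values (not necessarily those used in this study):

    import numpy as np

    def carreau_viscosity(shear_rate, mu_inf=0.00345, mu_0=0.056, lam=3.313, n=0.3568):
        # Carreau model: mu = mu_inf + (mu_0 - mu_inf) * [1 + (lambda*gamma_dot)^2]^((n-1)/2)
        return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

    for gamma_dot in np.logspace(-2, 3, 6):      # shear rates from 0.01 to 1000 1/s
        print(f"gamma_dot = {gamma_dot:8.2f} 1/s  ->  mu = {carreau_viscosity(gamma_dot):.5f} Pa*s")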
5112 Using Data Mining Methodology to Build the Predictive Model of Gold Passbook Price
Authors: Chien-Hui Yang, Che-Yang Lin, Ya-Chen Hsu
Abstract:
A gold passbook is an investment tool that is especially suitable for investors making small investments in physical gold. The gold passbook carries lower risk than other ways of investing in gold, but its price is still affected by the gold price, and many factors influence the gold price. Therefore, building a model to predict the gold passbook price can both reduce investment risk and increase returns. This study investigates the important factors that influence the gold passbook price and utilizes the Group Method of Data Handling (GMDH) to build the predictive model. This method not only identifies the significant variables but also performs well in prediction. The significant variables for the gold passbook price, which can be predicted by GMDH, are the US dollar exchange rate, international petroleum price, unemployment rate, wholesale price index, rediscount rate, foreign exchange reserves, misery index, prosperity coincident index and industrial index.
Keywords: Gold price, Gold passbook price, Group Method of Data Handling (GMDH), Regression.
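A minimal sketch of a single GMDH layer: the Ivakhnenko quadratic polynomial is fitted to every pair of candidate inputs and the pairs with the lowest validation error are kept; the synthetic inputs stand in for indicators such as the exchange rate or petroleum price:

    import numpy as np
    from itertools import combinations

    def pair_design(x1, x2):
        # Ivakhnenko polynomial terms: 1, x1, x2, x1*x2, x1^2, x2^2
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

    def gmdh_layer(X_tr, y_tr, X_val, y_val, keep=3):
        candidates = []
        for i, j in combinations(range(X_tr.shape[1]), 2):
            coef, *_ = np.linalg.lstsq(pair_design(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
            err = np.mean((pair_design(X_val[:, i], X_val[:, j]) @ coef - y_val) ** 2)
            candidates.append((err, (i, j), coef))
        return sorted(candidates, key=lambda c: c[0])[:keep]      # the best partial models survive

    rng = np.random.default_rng(5)
    X = rng.normal(size=(120, 5))                                  # five hypothetical economic indicators
    y = 2 * X[:, 0] - X[:, 1] * X[:, 3] + rng.normal(0, 0.1, 120)  # synthetic passbook price series
    for err, (i, j), _ in gmdh_layer(X[:80], y[:80], X[80:], y[80:]):
        print(f"inputs ({i},{j}) -> validation MSE {err:.4f}")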
5111 An Agent-Based Modelling Simulation Approach to Calculate Processing Delay of GEO Satellite Payload
Authors: V. Vicente E. Mujica, Gustavo Gonzalez
Abstract:
The global coverage of broadband multimedia and internet-based services in terrestrial-satellite networks is of particular interest to satellite providers seeking to deliver services with low latency and high signal quality to diverse users. In particular, the on-board processing delay is an inherent source of latency in satellite communication that is sometimes neglected in the end-to-end delay of the satellite link. The framework of this paper includes the modelling of an on-orbit satellite payload using an agent model that can reproduce the properties of processing delays. In essence, different spatial interpolation methods are compared to evaluate physical data obtained from a GEO satellite in order to define a discretization function for determining that delay. Furthermore, the performance of the proposed agent and the developed delay discretization function are validated together by simulating a hybrid satellite and terrestrial network. The simulation results show high accuracy with respect to the characteristics of the initial processing-delay data points for the Ku band.
Keywords: Terrestrial-satellite networks, latency, on-orbit satellite payload, simulation.
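A small sketch of comparing spatial interpolation methods on scattered delay samples, standing in for the discretization-function step above (the sample points and delay values are synthetic):

    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(6)
    pts = rng.uniform(0, 1, size=(200, 2))                          # scattered measurement locations
    delay = 5 + 2 * np.sin(4 * pts[:, 0]) * np.cos(3 * pts[:, 1])   # synthetic processing delay (ms)

    grid_x, grid_y = np.mgrid[0:1:50j, 0:1:50j]                     # regular grid for discretization
    truth = 5 + 2 * np.sin(4 * grid_x) * np.cos(3 * grid_y)
    for method in ("nearest", "linear", "cubic"):
        est = griddata(pts, delay, (grid_x, grid_y), method=method)
        err = np.nanmean(np.abs(est - truth))                       # NaNs occur outside the convex hull
        print(f"{method:7s} interpolation, mean abs error: {err:.4f} ms")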
5110 Powerful Tool to Expand Business Intelligence: Text Mining
Authors: Li Gao, Elizabeth Chang, Song Han
Abstract:
With the extensive inclusion of documents, especially text, in business systems, data mining does not cover the full scope of Business Intelligence. Data mining cannot deliver its impact when it comes to extracting useful details from large collections of unstructured and semi-structured written material based on natural language. The most pressing issue is to draw the potential business intelligence from text. In order to gain competitive advantages for the business, it is necessary to develop a new, powerful tool, text mining, to expand the scope of Business Intelligence. In this paper, we set out the strengths of text mining in extracting business intelligence from the huge amount of textual information within business systems. We apply text mining to each stage of Business Intelligence systems to show that text mining is a powerful tool for expanding the scope of BI. After reviewing basic definitions and some related technologies, we discuss their relationship to text mining and the benefits they bring. Some examples and applications of text mining are also given. The motivation is to develop a new approach to effective and efficient textual information analysis, and thus to expand the scope of Business Intelligence using the powerful tool of text mining.
Keywords: Business intelligence, document warehouse, text mining.
5109 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)
Authors: Mingren Shi, Michael Renton
Abstract:
There is a worldwide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes of R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to explicitly include the mortalities of the different genotypes in the model. In this paper we describe how we used two generalized linear models (GLM), the probit and logistic models, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than the traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of smaller least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, but is less time-consuming and computationally demanding.
Keywords: Mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation.
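A minimal sketch of the generalized-inverse idea described above: observed mortalities are passed through the probit link (inverse normal CDF) and the coefficients in log concentration and log time are obtained directly with the Moore-Penrose pseudoinverse; the dose-response values below are synthetic, not the published bioassay data:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    conc = np.repeat([0.01, 0.05, 0.1, 0.5, 1.0], 4)    # phosphine concentration (mg/L), synthetic
    time = np.tile([24, 48, 72, 96], 5)                 # exposure time (h), synthetic
    beta_true = np.array([0.5, 1.5, 0.8])               # intercept, slope in log10(conc), slope in log10(time)
    lin = beta_true[0] + beta_true[1] * np.log10(conc) + beta_true[2] * np.log10(time)
    mortality = np.clip(norm.cdf(lin) + rng.normal(0, 0.02, lin.size), 0.001, 0.999)

    # Probit transform of the observed mortality, then a direct least-squares solve via pseudoinverse.
    y = norm.ppf(mortality)
    X = np.column_stack([np.ones_like(conc), np.log10(conc), np.log10(time)])
    beta_hat = np.linalg.pinv(X) @ y
    print("estimated coefficients:", np.round(beta_hat, 3))   # approximately recovers beta_true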
5108 Historical and Future Rainfall Variations in Bangladesh
Authors: M. M. Hossain, M. Z. Hasan, M. Alauddin, S. Akhter
Abstract:
Climate change has become a major concern across the world, as the intensity and quantity of rainfall, mean surface temperature and other climatic parameters have changed not only in Bangladesh but across the entire globe. Bangladesh has already experienced many natural hazards; among them, changing rainfall patterns and erratic, heavy rainfall are very common. However, the extent of the changes in rainfall pattern and amount is still in question. This study aimed to reveal how historical rainfall varied over time and what its future trend would be. In this context, historical rainfall data (1975-2014) were collected from the Bangladesh Meteorological Department (BMD), and a time series model was then developed using the Box-Jenkins algorithm in IBM SPSS to forecast future rainfall. The analysis of the historical data revealed that the amount of rainfall decreased over time and shifted towards the post-monsoon period. The forecast shows that the pre-monsoon and early monsoon will become drier in the future, whereas the late monsoon and post-monsoon will show large fluctuations in rainfall magnitude with temporal variations. This means Bangladesh will face comparatively drier seasons in the future, which may be a serious problem for a country that depends on agriculture.
Keywords: Monsoon, Pre-monsoon, rainfall, pattern, variations, IBM-SPSS.
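A small sketch of a Box-Jenkins style fit and forecast using statsmodels instead of SPSS; the monthly rainfall series, the (1, 0, 1)x(1, 0, 1, 12) order and the 24-month horizon are illustrative choices only:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(8)
    months = pd.date_range("1975-01", "2014-12", freq="MS")
    season = 150 + 120 * np.sin(2 * np.pi * (months.month - 4) / 12)    # monsoon-peaked cycle (mm)
    rainfall = pd.Series(np.clip(season + rng.normal(0, 40, len(months)), 0, None), index=months)

    # Seasonal ARIMA fit and forecast (orders chosen for illustration, not taken from the study).
    model = ARIMA(rainfall, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit()
    forecast = model.forecast(steps=24)
    print(forecast.head(12).round(1))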
5107 Geostatistical Analysis and Mapping of Ground-level Ozone in a Medium Sized Urban Area
Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez
Abstract:
Ground-level tropospheric ozone is one of the air pollutants of most concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to high emissions of ozone precursors and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. This work presents some results of a study of urban ozone distribution patterns in the city of Badajoz, the largest and most industrialized city in the Extremadura region (southwest Spain). Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations with an automatic portable analyzer, during periods selected for conditions favourable to ozone production. The measured ozone data were then analyzed using geostatistical techniques to evaluate the ozone distribution across the city. First, the exploratory analysis revealed that the data were normally distributed, which is a desirable property for the subsequent stages of the geostatistical study. Second, in the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms. The parameters of these variograms (sill, range and nugget) revealed that the maximum distance of spatial dependence is between 302 m and 790 m and that the variable, air ozone concentration, is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the experimental study area using geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided by probability maps based on kriging interpolation and the kriging standard deviation.
Keywords: Kriging, map, tropospheric ozone, variogram.
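A minimal sketch of the structural-analysis step: a spherical variogram model (nugget, sill, range) fitted to an experimental semivariogram; the lag distances and semivariance values are synthetic stand-ins for the ozone data:

    import numpy as np
    from scipy.optimize import curve_fit

    def spherical_variogram(h, nugget, sill, a):
        # gamma(h) = nugget + (sill - nugget) * (1.5*h/a - 0.5*(h/a)^3) for h <= a, and sill beyond the range a
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h <= a, g, sill)

    lags = np.array([50, 100, 200, 300, 400, 500, 600, 800, 1000], dtype=float)   # lag distance (m)
    gamma_exp = spherical_variogram(lags, 5.0, 40.0, 550.0) \
        + np.random.default_rng(9).normal(0, 1, lags.size)                        # synthetic semivariances

    (nugget, sill, rng_a), _ = curve_fit(spherical_variogram, lags, gamma_exp, p0=[1.0, 30.0, 400.0])
    print(f"nugget = {nugget:.1f}, sill = {sill:.1f}, range = {rng_a:.0f} m")      # range ~ distance of spatial dependence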
5106 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength
Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos
Abstract:
Our increasing ability to solve complex engineering problems is directly related to the processing capacity of computers, which allows numerical algorithms to be run quickly and accurately. Besides the growing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to produce incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, nine statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modelled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modelled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
Keywords: Statistical slope stability analysis, Skew distributions, Probability of failure, Functions of random variables.
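A rough sketch of the failure-probability idea using an infinite-slope Mohr-Coulomb safety factor and a random friction angle; the slope geometry, soil parameters, the normal distribution and the use of Monte Carlo sampling (instead of the analytical derivation) are all illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(10)
    n = 200_000
    phi = np.radians(rng.normal(30.0, 3.0, n))            # friction angle, modelled here as Normal(30 deg, 3 deg)

    # Infinite-slope Mohr-Coulomb safety factor:
    # FS = [c + gamma*z*cos^2(beta)*tan(phi)] / [gamma*z*sin(beta)*cos(beta)]
    c, gamma, z, beta = 5.0, 18.0, 3.0, np.radians(30.0)  # cohesion (kPa), unit weight (kN/m3), depth (m), slope angle
    fs = (c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta))

    p_failure = np.mean(fs < 1.0)
    print(f"mean FS = {fs.mean():.2f}, probability of failure = {p_failure:.4f}")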
5105 A Watermarking System Using the Wavelet Technique for Satellite Images
Authors: I. R. Farah, I. B. Ismail, M. B. Ahmed
Abstract:
The rapid development of new technologies and the emergence of increasingly sophisticated open communication systems create a new challenge: protecting digital content from piracy. Digital watermarking is a recent research area and a technique proposed as a solution to these problems. It consists of inserting identification information (a watermark) into digital data (audio, video, images, databases, etc.) in an invisible and indelible manner, in such a way that the original medium's quality is not degraded. Moreover, the watermark must be correctly extractable despite deterioration of the watermarked medium (i.e., attacks). In this paper we propose a system for watermarking satellite images. We chose to embed the watermark in the frequency domain, specifically the discrete wavelet transform (DWT). We applied our algorithm to satellite images of central Tunisia. The experiments show satisfying results. In addition, our algorithm showed strong resistance to different attacks, notably compression (JPEG, JPEG2000), filtering, histogram manipulation and geometric distortions such as rotation, cropping and scaling.
Keywords: Digital data watermarking, Spatial Database, Satellite images, Discrete Wavelet Transform (DWT).
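A minimal sketch of DWT-domain embedding with PyWavelets: the watermark is added, with a small strength factor, to the diagonal detail coefficients of a one-level Haar decomposition; the image, watermark, wavelet and embedding strength are assumptions, not the paper's exact scheme:

    import numpy as np
    import pywt

    rng = np.random.default_rng(11)
    image = rng.uniform(0, 255, size=(128, 128))           # stand-in for one band of a satellite image
    watermark = rng.choice([-1.0, 1.0], size=(64, 64))     # binary (+/-1) watermark pattern
    alpha = 2.0                                            # embedding strength (assumed)

    # One-level DWT, embed into the diagonal detail sub-band, then reconstruct.
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    watermarked = pywt.idwt2((cA, (cH, cV, cD + alpha * watermark)), "haar")

    # Non-blind extraction for this sketch: compare the detail sub-band against the original's.
    _, (_, _, cD_marked) = pywt.dwt2(watermarked, "haar")
    recovered = np.sign(cD_marked - cD)
    print("fraction of watermark bits recovered:", float(np.mean(recovered == watermark)))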
5104 Political Information Exposures, Politicians' Perceptions, Political Attitudes and Political Participations among People in Bangkok Metropolitan Area
Authors: Pratoom Rekklang
Abstract:
The purposes of this study are to examine political information exposure, politicians' perceptions, political attitudes and political participation among people in the Bangkok Metropolitan Area. The sample consisted of 420 respondents, selected using the accidental sampling method. Questionnaires were administered to all respondents to obtain the data for this research. The t-test, one-way ANOVA and Pearson's correlation coefficient were used to analyze the data. The findings are as follows: differences in gender, education, income and occupation have a significant effect on political information exposure; differences in age and income have a significant effect on politicians' perceptions; differences in income have a significant effect on political attitudes; and differences in gender, income and occupation have a significant effect on political participation. There were significant relationships between political information exposure, political attitudes and political participation, and between politicians' perceptions, political attitudes and political participation.
Keywords: Political Information Exposures, Politicians' Perceptions, Political Attitudes, Political Participations.
5103 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis
Authors: Amir Hajian, Sepehr Damavandinejadmonfared
Abstract:
In this paper, the issue of dimensionality reduction in finger vein recognition systems is investigated using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as several kernel functions can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector, which is of importance especially in real-world applications and usage of such algorithms. A fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA (with polynomial, Gaussian, and Laplacian kernels) in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA).
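A minimal sketch of sweeping the KPCA kernel and feature dimension with scikit-learn and scoring a simple nearest-neighbour classifier; the digits dataset, the gamma values and the classifier are stand-ins for the finger vein images and the paper's setup, and the Laplacian kernel is supplied as a precomputed matrix because it is not a built-in KernelPCA option:

    from sklearn.datasets import load_digits
    from sklearn.decomposition import KernelPCA
    from sklearn.metrics.pairwise import laplacian_kernel
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    settings = {"poly": dict(kernel="poly"),
                "rbf (Gaussian)": dict(kernel="rbf", gamma=1e-3),
                "laplacian": dict(kernel="precomputed")}
    for name, kw in settings.items():
        for n_comp in (10, 20, 40, 80):
            kpca = KernelPCA(n_components=n_comp, **kw)
            if kw["kernel"] == "precomputed":
                Z_tr = kpca.fit_transform(laplacian_kernel(X_tr, X_tr, gamma=1e-3))
                Z_te = kpca.transform(laplacian_kernel(X_te, X_tr, gamma=1e-3))
            else:
                Z_tr, Z_te = kpca.fit_transform(X_tr), kpca.transform(X_te)
            acc = KNeighborsClassifier().fit(Z_tr, y_tr).score(Z_te, y_te)
            print(f"{name:15s} dim={n_comp:3d}  accuracy={acc:.3f}")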
5102 Quantifying Mobility of Urban Inhabitant Based on Social Media Data
Authors: Yuyun, Fritz Akhmad Nuzir, Bart Julien Dewancker
Abstract:
Check-in locations on social media provide information about an individual's location. The millions of data points generated from these sites provide knowledge about human activity. In this research, we used a geolocation service and users' texts posted on the Twitter social media platform to analyze human mobility. Our research answers the questions: what are the movement patterns of a citizen, and how far do people travel in the city? We explore the trajectories of 201,118 check-ins from 22,318 users over a period of one month in Makassar city, Indonesia. To capture individual mobility, we only analyze users with more than 30 check-ins. We used a systematic sampling approach to assign the research sample. The study found that individual movement shows a high degree of regularity and intensity in certain places. Another finding is that the average distance an urban inhabitant travels per day is as much as 9.6 km.
Keywords: Mobility, check-in, distance, Twitter.
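A small sketch of computing a user's daily travel distance from check-in coordinates with the haversine formula; the dates and coordinates are fabricated examples, not the Makassar dataset:

    import math
    from collections import defaultdict

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, Earth radius ~6371 km.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    # (day, latitude, longitude) check-ins for one user, in chronological order (illustrative values).
    checkins = [("2015-06-01", -5.135, 119.423), ("2015-06-01", -5.147, 119.432),
                ("2015-06-01", -5.161, 119.446), ("2015-06-02", -5.135, 119.423),
                ("2015-06-02", -5.184, 119.451)]

    daily = defaultdict(float)
    for (d1, la1, lo1), (d2, la2, lo2) in zip(checkins, checkins[1:]):
        if d1 == d2:                                  # only sum hops made within the same day
            daily[d1] += haversine_km(la1, lo1, la2, lo2)
    print({day: round(km, 2) for day, km in daily.items()})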
5101 A Novel Approach to Improve Users Search Goal in Web Usage Mining
Authors: R. Lokeshkumar, P. Sengottuvelan
Abstract:
Web mining aims to discover and extract useful information. Different users may have different search goals when they submit queries to a search engine. Inferring and analyzing user search goals can be very useful for providing more relevant results for a user's search query. In this project, we propose a novel approach to infer user search goals by analyzing search engine query logs. First, feedback sessions are constructed from user click-through logs, and these efficiently reflect the information needs of users. Second, we propose a preprocessing technique to clean unnecessary data from the web log file (the feedback sessions). Third, we propose a technique to generate pseudo-documents that represent feedback sessions for clustering. Finally, we apply the k-medoids clustering algorithm to discover different user search goals and to provide better results for a search query based on the user's feedback sessions.
Keywords: Data Preprocessing, Session Identification, Web log mining, Web Personalization.
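A minimal sketch of the final clustering step: pseudo-documents are vectorized with TF-IDF and grouped by a small hand-rolled k-medoids loop; the documents, the cosine distance and k = 2 are illustrative assumptions:

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import pairwise_distances

    pseudo_docs = ["cheap flights paris hotel deals", "paris hotel booking cheap",
                   "python list sort tutorial", "sort a python list example",
                   "flights to paris low cost", "python sorting algorithm tutorial"]

    X = TfidfVectorizer().fit_transform(pseudo_docs)
    D = pairwise_distances(X, metric="cosine")            # distance matrix between pseudo-documents

    def k_medoids(D, k, n_iter=20, seed=0):
        medoids = np.random.default_rng(seed).choice(len(D), size=k, replace=False)
        for _ in range(n_iter):
            labels = np.argmin(D[:, medoids], axis=1)     # assign each document to its nearest medoid
            new_medoids = np.array([np.flatnonzero(labels == c)[np.argmin(
                D[np.ix_(labels == c, labels == c)].sum(axis=1))] for c in range(k)])
            if np.array_equal(new_medoids, medoids):
                break
            medoids = new_medoids
        return labels

    labels = k_medoids(D, k=2)
    for c in range(2):
        print(f"search goal {c}:", [pseudo_docs[i] for i in np.flatnonzero(labels == c)])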
5100 The Impact of Post-Disaster Relocation on Community Solidarity: The Case of Post-Disaster Reconstruction after Typhoon Morakot in Taiwan
Authors: Tsung-Hsi Fu, Wan-I Lin, Jyh-Cherng Shieh
Abstract:
Typhoon Morakot hit Taiwan in 2009 and caused severe damage. The government employed a compulsory relocation strategy for post-disaster reconstruction. This study analyzes the impact of this strategy on community solidarity. It employs multiple methods of data collection, including semi-structured interviews, secondary data, and documentation. The results indicate that the government's strategy for distributing housing led to conflicts within the communities. In addition, the relocation process stimulated tensions between victims of the disaster and residents whose land was chosen as the new site for relocation. The government's strategy of "collective relocation" also weakened community integration. Moreover, the fact that a permanent housing community may accommodate people from different places poses further challenges for the development of new interpersonal relations in the communities. The study concludes by emphasizing the importance of taking social, economic and cultural aspects into consideration in post-disaster relocation.
Keywords: community solidarity, permanent housing, post-disaster reconstruction, relocation.
5099 Statistical and Land Planning Study of Tourist Arrivals in Greece during 2005-2016
Authors: Dimitra Alexiou
Abstract:
During the last 10 years, in spite of the economic crisis, the number of tourists arriving in Greece has increased, particularly during the tourist season from April to October. In this paper, the number of annual tourist arrivals is studied to explore tourists' preferences with regard to the month of travel and the selected destinations, as well as the amount of money spent. The collected data are processed with statistical methods, yielding numerical and graphical results. The computation of statistical parameters and forecasting with exponential smoothing lead to useful conclusions that can be used by the Greek tourism authorities, as well as by tourist organizations, for planning purposes in the coming years. The results of this paper and the computed forecasts can also be used for decision making by private tourist enterprises investing in Greece. With regard to the statistical methods, simple exponential smoothing of the time series data is employed; the search for the best forecast for 2017 and 2018 provides the value of the smoothing coefficient. Microsoft Excel is used for all statistical computations and graphics.
Keywords: Tourism, statistical methods, exponential smoothing, land spatial planning, economy, Microsoft Excel.
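A small sketch of simple exponential smoothing with a grid search for the smoothing coefficient, mirroring in Python the best-forecast search done in Excel; the annual arrival figures are made up for illustration:

    import numpy as np

    # Annual tourist arrivals in millions, 2005-2016 (illustrative values only).
    arrivals = np.array([14.8, 15.2, 16.2, 15.9, 14.9, 15.0, 16.4, 15.5, 17.9, 22.0, 23.6, 24.8])

    def ses_one_step_errors(y, alpha):
        # Level update: l_t = alpha*y_t + (1 - alpha)*l_{t-1}; the forecast for t+1 is l_t.
        level, errors = y[0], []
        for obs in y[1:]:
            errors.append(obs - level)
            level = alpha * obs + (1 - alpha) * level
        return np.array(errors), level

    alphas = np.linspace(0.05, 0.95, 19)
    sse = [np.sum(ses_one_step_errors(arrivals, a)[0] ** 2) for a in alphas]
    best_alpha = alphas[int(np.argmin(sse))]
    _, last_level = ses_one_step_errors(arrivals, best_alpha)
    print(f"best smoothing coefficient alpha = {best_alpha:.2f}")
    print(f"flat SES forecast for 2017 and 2018: {last_level:.1f} million arrivals")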
5098 Practical Applications and Connectivity Algorithms in Future Wireless Sensor Networks
Authors: Mohamed K. Watfa
Abstract:
Like any sentient organism, a smart environment relies first and foremost on sensory data captured from the real world. The sensory data come from sensor nodes of different modalities deployed at different locations, forming a Wireless Sensor Network (WSN). Embedding smart sensors in humans has been a research challenge due to the limitations imposed by these sensors, from computational capability to limited power. In this paper, we first propose a practical WSN application that would enable blind people to see what their neighbouring partners can see. The challenge is that the actual mapping from input images to brain patterns is very complex and not well understood. We also study the connectivity problem in 3D/2D wireless sensor networks and propose efficient distributed algorithms to achieve the required connectivity of the system. We provide a new connectivity algorithm, CDCA, to connect disconnected parts of a network using cooperative diversity. Through simulations, we analyze the connectivity gains and energy savings provided by this novel form of cooperative diversity in WSNs.
Keywords: Wireless Sensor Networks, Pervasive Computing, Eye Vision Application, 3D Connectivity, Clusters, Energy Efficient, Cooperative diversity.
5097 The Management Accountant's Roles for Creation of Corporate Shared Value
Authors: Prateep Wajeetongratana
Abstract:
This study investigates the management accountant's roles that are linked with the creation of corporate shared value, to enable more effective decision-making and to improve the information provided to stakeholders. A mixed-method design is employed, using triangulation for credibility. A quantitative approach is used to survey 200 Thai companies providing annual reports on the Stock Exchange of Thailand. The results of the study reveal that the environmental and social data incorporated in corporate social responsibility (CSR) disclosures are based on the indicators of the Global Reporting Initiative (GRI), at a statistically significant level of 0.01. Environmental and social indicators in CSR are associated with the environmental and social data disclosed in the annual report to support stakeholders' and the public's interests, and the relationship between the CSR disclosures and the information in annual reports is statistically significant at the 0.01 level.
Keywords: Corporate social responsibility, creating shared value, management accountant’s roles, stock exchange of Thailand.
5096 A Meta-Model for Tubercle Design of Wing Planforms Inspired by Humpback Whale Flippers
Authors: A. Taheri
Abstract:
Inspired by the topology of humpback whale flippers, a meta-model is designed for wing planform design. The network is trained on experimental data using a cascade-forward artificial neural network (ANN) to investigate the effects of the amplitude and wavelength of sinusoidal leading-edge configurations on wing performance. Afterwards, the trained ANN is coupled with a genetic algorithm to form an optimum design strategy. Finally, the flow physics of the problem, for both an optimized rectangular planform and a real flipper geometry, is simulated using the Lam-Bremhorst low Reynolds number turbulence model with damping wall functions resolving to the wall. Lift and drag coefficients, together with details of the flow, are presented along with comparisons to available experimental data. The results show that the proposed strategy can be adopted successfully as a fast-estimation tool for predicting the performance of wing planforms with a wavy leading edge in the preliminary design phase.
Keywords: Humpback whale flipper, cascade-forward ANN, GA, CFD, Bionics.
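A minimal sketch of the surrogate-plus-search idea above: a small neural network is fitted to synthetic (amplitude, wavelength) performance samples and then searched with SciPy's differential evolution, which stands in here for the paper's cascade-forward ANN and genetic algorithm:

    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(12)
    # Synthetic "experimental" samples: amplitude A and wavelength W of the sinusoidal leading edge.
    A = rng.uniform(0.02, 0.12, 200)
    W = rng.uniform(0.2, 0.6, 200)
    lift_to_drag = 20 - 150 * (A - 0.05) ** 2 - 40 * (W - 0.4) ** 2 + rng.normal(0, 0.2, 200)

    X = np.column_stack([A, W])
    surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0).fit(X, lift_to_drag)

    # Evolutionary search over the trained surrogate (maximize predicted L/D).
    result = differential_evolution(lambda x: -surrogate.predict(x.reshape(1, -1))[0],
                                    bounds=[(0.02, 0.12), (0.2, 0.6)], seed=0)
    print("optimum amplitude and wavelength:", np.round(result.x, 3), "predicted L/D:", round(-result.fun, 2))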
5095 Electromyography Pattern Classification with Laplacian Eigenmaps in Human Running
Authors: Elnaz Lashgari, Emel Demircan
Abstract:
Electromyography (EMG) is one of the most important interfaces between humans and robots for rehabilitation. Decoding this signal helps to recognize muscle activation and convert it into smooth motion for the robots. Detecting each muscle's pattern during walking and running is vital for improving the quality of a patient's life. In this study, EMG data from 10 muscles in 10 subjects at 4 different speeds were analyzed. EMG signals are nonlinear and high dimensional. To deal with this challenge, we extracted features in the time-frequency domain and used manifold learning with the Laplacian Eigenmaps algorithm to find the intrinsic features that represent the data in a low-dimensional space. We then used a Bayesian classifier to identify various patterns of EMG signals for different muscles across a range of running speeds. The best result, for the vastus medialis muscle, corresponds to a sensitivity of 97.87±0.69 and a specificity of 88.37±0.79, with an accuracy of 97.07±0.29, using the Bayesian classifier. The results of this study provide important insight into human movement and its application to robotics research.
Keywords: Electromyography, manifold learning, Laplacian Eigenmaps, running pattern.
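A minimal sketch of the pipeline described above, using scikit-learn's SpectralEmbedding (an implementation of Laplacian Eigenmaps) followed by a Gaussian naive Bayes classifier; the synthetic feature matrix stands in for time-frequency EMG features, and the embedding is computed on the full set because SpectralEmbedding has no out-of-sample transform:

    import numpy as np
    from sklearn.manifold import SpectralEmbedding
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(13)
    n_per_class, n_features = 150, 40                     # e.g. time-frequency features per EMG window
    X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_features)),
                   rng.normal(0.8, 1.0, (n_per_class, n_features))])    # two running-speed classes
    y = np.repeat([0, 1], n_per_class)

    # Laplacian Eigenmaps: nonlinear reduction to a low-dimensional intrinsic representation.
    Z = SpectralEmbedding(n_components=3, n_neighbors=15, random_state=0).fit_transform(X)

    # Bayesian (Gaussian naive Bayes) classification of the embedded features.
    scores = cross_val_score(GaussianNB(), Z, y, cv=5)
    print("cross-validated accuracy:", round(scores.mean(), 3))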
5094 Ensembling Adaptively Constructed Polynomial Regression Models
Authors: Gints Jekabsons
Abstract:
The subset selection approach to polynomial regression model building assumes that the chosen fixed full set of predefined basis functions contains a subset that describes the target relation sufficiently well. However, in most cases the necessary set of basis functions is not known and needs to be guessed, a potentially non-trivial (and long) trial-and-error process. In our research we consider a potentially more efficient approach, Adaptive Basis Function Construction (ABFC). It lets the model building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, two issues to some extent plague both subset selection and ABFC methods, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and by model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used subset selection method, as well as to some other well-known regression modeling methods, using publicly available data sets.
Keywords: Basis function construction, heuristic search, model ensembles, polynomial regression.
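A minimal sketch of the cross-validation and ensembling idea: candidate polynomial degrees play the role of alternative constructed basis sets, the best degree is chosen by CV, and the models fitted on the individual folds are averaged into an ensemble; this is a simplification, not the ABFC algorithm itself:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(14)
    X = rng.uniform(-2, 2, size=(120, 1))
    y = 1.0 + 0.5 * X[:, 0] - 0.8 * X[:, 0] ** 3 + rng.normal(0, 0.3, 120)

    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    cv_mse, fold_models = {}, {}
    for degree in (1, 2, 3, 4, 5, 6):
        errs, models = [], []
        for tr, te in kf.split(X):
            m = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X[tr], y[tr])
            errs.append(np.mean((m.predict(X[te]) - y[te]) ** 2))
            models.append(m)
        cv_mse[degree], fold_models[degree] = np.mean(errs), models

    best = min(cv_mse, key=cv_mse.get)
    ensemble_pred = np.mean([m.predict(X) for m in fold_models[best]], axis=0)   # average the fold models
    print("selected degree:", best, " ensemble training MSE:", round(float(np.mean((ensemble_pred - y) ** 2)), 4))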
5093 Learners' Perceptions of Tertiary Level Teachers' Code Switching: A Vietnamese Perspective
Authors: Hoa Pham
Abstract:
The literature on language teaching and second language acquisition has been largely driven by a monolingual ideology, with the common assumption that a second language (L2) is best taught and learned in the L2 only. The current study challenges this assumption by reporting learners' positive perceptions of tertiary level teachers' code switching practices in Vietnam. The findings of this study contribute to our understanding of code switching practices in language classrooms from a learners' perspective. Data were collected through focus group interviews with student participants working towards a Bachelor degree in English within the English for Business Communication stream. The literature has documented that this method of interviewing has a number of distinct advantages over individual student interviews. For instance, the group interactions generated by focus groups create a more natural environment than that of an individual interview because they include a range of communicative processes in which each individual may influence or be influenced by others, as they do in real life. The process of interaction provides the opportunity to obtain meanings and answers to a problem that are "socially constructed rather than individually created", leading to the capture of real-life data. The distinct feature of group interaction offered by this technique makes it a powerful means of obtaining deeper and richer data than individual interviews. The data generated through this study were analysed using a constant comparative approach. Overall, the students expressed positive views of this practice, indicating that it is a useful teaching strategy. Teacher code switching was seen as a learning resource and a source of support for language output. The practice was perceived to promote student comprehension and to aid the learning of content and target language knowledge. It was also believed to scaffold the students' language production in different contexts. However, the students indicated a preference for teacher code switching to be constrained, as extensive use was believed to negatively affect their L2 learning and to trigger cognitive reliance on the L1 for L2 learning. The students also perceived that when the L1 was used to a great extent, their ability to develop as autonomous learners was negatively affected. This study found that teacher code switching was supported by learners in certain contexts, suggesting that the widespread assumption in favour of a monolingual teaching approach needs to be reconsidered.
Keywords: Code switching, L1 use, L2 teaching, Learners' perception.