Search results for: location based data
44130 Topological Sensitivity Analysis for Reconstruction of the Inverse Source Problem from Boundary Measurement
Authors: Maatoug Hassine, Mourad Hrizi
Abstract:
In this paper, we consider a geometric inverse source problem for the heat equation with Dirichlet and Neumann boundary data. We reconstruct the exact form of the unknown source term from additional boundary conditions. Our motivation is to detect the location, the size and the shape of the source support. We present a one-shot algorithm based on the Kohn-Vogelius formulation and the topological gradient method. The geometric inverse source problem is formulated as a topology optimization problem. A topological sensitivity analysis is derived for a source function. Then, we present a non-iterative numerical method for the geometric reconstruction of the source term with unknown support using a level curve of the topological gradient. Finally, we give several examples to show the viability of the presented method.
Keywords: geometric inverse source problem, heat equation, topological optimization, topological sensitivity, Kohn-Vogelius formulation
Procedia PDF Downloads 300
44129 Response of Canola Traits to Integrated Fertilization Systems
Authors: Khosro Mohammadi
Abstract:
In order to study the effect of different sources of farmyard manure, compost and biofertilizers on the grain yield and quality of canola (Talaieh cultivar), an experiment was conducted in the Kurdistan region. Experimental units were arranged in a split-split plot design based on randomized complete blocks with three replications. Main plots consisted of two locations differing in soil texture, (L1): Agricultural Research Center of Sanandaj and (L2): Islamic Azad University of Sanandaj. Five strategies for supplying the base fertilizer requirement, including (N1): farmyard manure; (N2): compost; (N3): chemical fertilizers; (N4): farmyard manure + compost; and (N5): farmyard manure + compost + chemical fertilizers, were considered in the split plots. Four levels of biofertilizers were (B1): Bacillus lentus and Pseudomonas putida; (B2): Trichoderma harzianum; (B3): Bacillus lentus, Pseudomonas putida and Trichoderma harzianum; and (B4): control. Results showed that location, the different fertilizer sources and their interactions had a significant effect on grain yield. The highest grain yield (4660 kg/ha) was obtained from the treatment in which farmyard manure, compost and biofertilizers were co-applied in clay loam soil (Gerizeh station). The different fertilization methods had a significant effect on leaf chlorophyll; the highest chlorophyll content (38 SPAD) was obtained from the co-application of farmyard manure, chemical fertilizers and compost (the N5 treatment). Location, basal fertilizers and biofertilizers had a significant effect on the N, S and N/S of canola seed. Oil content decreased at the Gerizeh station, but oil yield increased significantly compared with the Azad University station. Co-application of compost and farmyard manure produced the highest percentages of oleic acid (61.5%) and linoleic acid (22.9%), a significant increase in both fatty acids. Finally, the L1N5B3 treatment, in which compost, farmyard manure and biofertilizers were co-applied at the Gerizeh station, was selected as the best treatment of the experiment.
Keywords: soil texture, organic fertilizer, chemical fertilizer, oil, Canola
Procedia PDF Downloads 403
44128 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have great potential for carbon sequestration; it can therefore be integrated into carbon emission reduction mechanisms. In sub-Saharan Africa in particular, the constraint lies in the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies “what is where?”, prior to the quantification of carbon stock in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information that supports decision-making out of this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis. A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018 to stratify the country based on the vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing different agroforestry systems and conducting qualitative interviews. A multi-temporal supervised image classification will then be done with a random forest algorithm, with the field data used for both training the algorithm and accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) characteristics of the different systems (main species, management, area, etc.), and (iii) an assessment report of the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
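A minimal sketch (not the project's actual pipeline) of the supervised classification step described above: per-pixel NDVI time series classified with a random forest, with field-campaign labels serving as training data. The array shapes, class count and synthetic values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# each sample is one pixel's NDVI time series (e.g. monthly composites, 2010-2018)
n_pixels, n_timesteps = 2000, 108
X = rng.uniform(0.1, 0.9, size=(n_pixels, n_timesteps))   # placeholder NDVI stack
y = rng.integers(0, 4, size=n_pixels)                      # placeholder agroforestry classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)                                   # training on "field" labels
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```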
Procedia PDF Downloads 145
44127 Survivable IP over WDM Network Design Based on 1 ⊕ 1 Network Coding
Authors: Nihed Bahria El Asghar, Imen Jouili, Mounir Frikha
Abstract:
Inter-datacenter transport networks are highly bandwidth- and delay-demanding. The data transferred over such a network are also highly QoS-exigent, mostly because a huge volume of data should be transported transparently with regard to the application user. To avoid data transfer failure, a backup path should be reserved, and no re-routing delay should be observed. A dedicated 1+1 protection is, however, not applicable in an inter-datacenter transport network because of the huge spare capacity it requires. In this context, we propose a survivable virtual network with minimal backup based on network coding (1 ⊕ 1) and solve it using a modified Dijkstra-based heuristic.
Keywords: network coding, dedicated protection, spare capacity, inter-datacenters transport network
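An illustrative sketch of the 1 ⊕ 1 idea (not the authors' design or heuristic): instead of a full dedicated 1+1 copy per flow, a single coded backup carries the XOR of two working flows, so one failure can be recovered with less spare capacity. Flow names and payloads are invented.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

flow_1 = b"payload-from-datacenter-A"
flow_2 = b"payload-from-datacenter-B"
n = max(len(flow_1), len(flow_2))
f1, f2 = flow_1.ljust(n, b"\0"), flow_2.ljust(n, b"\0")   # pad to equal length for the XOR

coded_backup = xor_bytes(f1, f2)        # sent on the protection path

# suppose the working path of flow_1 fails: the receiver still holds flow_2
recovered_1 = xor_bytes(coded_backup, f2)
assert recovered_1 == f1
print("flow_1 recovered from coded backup:", recovered_1.rstrip(b"\0"))
```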
Procedia PDF Downloads 447
44126 Verbal Prefix Selection in Old Japanese: A Corpus-Based Study
Authors: Zixi You
Abstract:
There are a number of verbal prefixes in Old Japanese. However, the selection, or the compatibility, of verbs and verbal prefixes is among the least investigated topics in the Old Japanese language. Unlike other types of prefixes, verbal prefixes are more often than not listed in dictionaries with very brief information such as ‘unknown meaning’ or ‘rhythmic function only’. To fill in a part of this knowledge gap, this paper presents an exhaustive investigation based on the newly developed ‘Oxford Corpus of Old Japanese’ (OCOJ), which includes nearly all existing resources of the Old Japanese language, with detailed linguistic information in TEI-XML tags. In this paper, we propose the possibility that the following three prefixes, i-, sa-, ta- (with ta- being considered a variant of sa-), are relevant to split intransitivity in Old Japanese, with evidence that unaccusative verbs favor i- and that unergative verbs favor sa-(ta-). This might be undermined by the fact that transitives are also found to follow i-. However, with several manifestations of split intransitivity in Old Japanese discussed, the behavior of transitives in verbal prefix selection is no longer as surprising as it may seem when one looks at the selection of verbal prefixes in isolation. It is possible that there are one or more features that play essential roles in determining the selection of i-, and the attested transitive verbs happen to have these features. The data suggest that this feature is a sense of ‘change’ of location or state involved in the event denoted by the verb, which is a feature of typical unaccusatives. This is further discussed in relation to the ‘affectedness’ hierarchy. The presentation of this paper, which includes a brief demonstration of the OCOJ, is expected to be of interest to both specialists and general audiences.
Keywords: Old Japanese, split intransitivity, unaccusatives, unergatives, verbal prefix selection
Procedia PDF Downloads 415
44125 The Impact of COVID-19 on Childhood Academic Functioning and Anxiety: A Literature Review
Authors: Lindsey Giunta
Abstract:
This review examines the current literature regarding the impact of COVID-19 on academic functioning and anxiety in children and adolescents. The objective was to determine the ways in which the pandemic affected youth mental health and academics, in addition to the extent to which these factors were transformed as a result of the worldwide state of affairs. Twenty papers were selected and reviewed, and the data showed long-term consequences for youth mental health resulting from the current pandemic. The COVID-19 pandemic and its associated lockdowns led to disrupted childhood education, and the data showed that the growth of cognitive executive functions was impacted to varying degrees depending on geographic location. The literature recommends supplemental education at the national level, as well as mental health promotion within communities and schools.
Keywords: pandemic, children, adolescents, anxiety, academic functioning
Procedia PDF Downloads 154
44124 Change to the Location/Ownership and Control of Liquid Metering Skids
Authors: Mahmoud Jumah
Abstract:
This paper presents the circumstances and decision-making involved in change management in industrial processes, and the effective strategic planning needed to ensure on-time completion of projects. In this specific case, the Front End Engineering Design and the awarded Lump Sum Turn Key (LSTK) Contract had provided for full control and ownership of all Liquid Metering Skids by the Controlling Team. The demarcation and location were changed, and the ownership and control of the Liquid Metering Skids inside the boundaries of the Asset Owner were transferred from the Controlling Team to the Asset Owner after the award of the LSTK Contract. The requested changes resulted in an Adjustment Order, and the relevant scope of work is an essential part of the original Contract. The majority of equipment and materials (i.e. liquid metering skids, valves, piping, etc.) had already been in process.
Keywords: critical path, project change management, stakeholders problem solving, strategic planning
Procedia PDF Downloads 267
44123 Determination of Hydrocarbon Path Migration from Gravity Data Analysis (Ghadames Basin, Southern Tunisia, North Africa)
Authors: Mohamed Dhaoui, Hakim Gabtni
Abstract:
The migration of hydrocarbons is a fairly complicated process that depends on several parameters, both structural and sedimentological. In this study, we try to determine the secondary migration paths that convey hydrocarbons from their main source rock to the largest reservoir of the Paleozoic petroleum system in the Tunisian part of the Ghadames basin. The Silurian source rock is the main source rock of the Paleozoic petroleum system of the Ghadames basin, whereas the most solicited reservoir in this area is the Triassic reservoir TAGI (Trias Argilo-Gréseux Inférieur). Several geochemical studies have confirmed that the oil products of the TAGI come mainly from the Tannezuft Silurian source rock, which suggests that secondary migration occurs through the fault system affecting the post-Silurian series. Our study is based on the analysis and interpretation of gravity data. The gravity modeling was conducted in the northern part of the Ghadames basin and the Telemzane uplift. We noted that there is a close relationship between the location of producing oil fields and the gravity gradients that separate the positive and negative gravity anomalies. The analysis and transformation of the Bouguer anomaly map and the residual gravity map allowed us to understand the architecture of the Precambrian basement in the study area; thereafter, gravity models were established that allowed the probable migration paths to be determined.
Keywords: basement, Ghadames, gravity, hydrocarbon, migration path
Procedia PDF Downloads 367
44122 The Identification of Environmentally Friendly People: A Case of South Sumatera Province, Indonesia
Authors: Marpaleni
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) declared in 2007 that global warming and climate change are not just a series of events caused by nature, but rather are caused by human behaviour. Thus, to reduce the impact of human activities on climate change, information is required about how people respond to environmental issues and what constraints they face. However, information on these and other phenomena remains largely missing, or is not fully integrated within the existing data systems. The proposed study is aimed at filling this gap in knowledge by focusing on the Environmentally Friendly Behaviour (EFB) of the people of Indonesia, taking the province of South Sumatera as a case study. EFB is defined as any activity in which people engage to improve the condition of natural resources and/or to diminish the impact of their behaviour on the environment. This activity is measured in terms of consumption in five areas at the household level, namely housing, energy, water usage, recycling and transportation. By adopting Indonesia's Environmentally Friendly Behaviour survey conducted by Statistics Indonesia in 2013, this study aims to precisely identify one's orientation towards EFB based on socio-demographic characteristics such as age, income, occupation, location, education, gender and family size. The results of this research will be useful to precisely identify what support people require to strengthen their EFB, to help identify the specific constraints that different actors and groups face, and to uncover a more holistic understanding of EFB in relation to particular demographic and socio-economic contexts. As the empirical data are drawn from the national data sample framework, which will continue to be collected, they can be used to forecast and monitor the future of EFB.
Keywords: environmentally friendly behavior, demographic, South Sumatera, Indonesia
Procedia PDF Downloads 285
44121 Cloud Design for Storing Large Amount of Data
Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás
Abstract:
The main goal of this paper is to introduce our design of a private cloud for storing a large amount of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We have tested hypervisors, cloud management tools, storage for storing all data, and Hadoop to provide data analysis on unstructured data. Providing high availability, virtual network management, logical separation of projects and also rapid deployment of physical servers to our environment was also needed.
Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization
Procedia PDF Downloads 353
44120 Pantograph-Catenary Contact Force: Features Evaluation for Catenary Diagnostics
Authors: Mehdi Brahimi, Kamal Medjaher, Noureddine Zerhouni, Mohammed Leouatni
Abstract:
Prognostics and Health Management (PHM) is a systems engineering discipline which provides solutions and models for the implementation of predictive maintenance. The approach is based on extracting useful information from monitoring data to assess the “health” state of industrial equipment or an asset. In this paper, we examine multiple features extracted from the Pantograph-Catenary contact force in order to select the most relevant ones for achieving a diagnostics function. The feature extraction methodology is based on simulation data generated by a Pantograph-Catenary simulation software called INPAC, as well as on measurement data. The feature extraction method is based on both statistical and signal processing analyses, while the feature selection method is based on statistical criteria.
Keywords: catenary/pantograph interaction, diagnostics, Prognostics and Health Management (PHM), quality of current collection
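A minimal sketch of the statistical feature-extraction step described above, applied to a synthetic contact-force signal. The feature set (mean, standard deviation, RMS, skewness, kurtosis, peak-to-peak) and the signal itself are illustrative assumptions, not the paper's exact feature list or INPAC output.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def extract_features(force: np.ndarray) -> dict:
    # common statistical descriptors of a contact-force time series
    return {
        "mean": float(np.mean(force)),
        "std": float(np.std(force)),
        "rms": float(np.sqrt(np.mean(force ** 2))),
        "skewness": float(skew(force)),
        "kurtosis": float(kurtosis(force)),
        "peak_to_peak": float(np.ptp(force)),
    }

# placeholder signal standing in for simulated or measured contact force [N]
t = np.linspace(0, 10, 5000)
force = 120 + 30 * np.sin(2 * np.pi * 1.5 * t) + np.random.default_rng(1).normal(0, 5, t.size)
print(extract_features(force))
```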
Procedia PDF Downloads 290
44119 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census
Authors: Jaroslav Kraus
Abstract:
Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together these make up more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households is the result of a long-term decrease in fertility and an increase in divorce, but also of the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, such as the capital Prague and some others, with a changing pattern. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used in the analysis: (i) measures of geographic distribution, (ii) mapping of clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features, and (iii) a pattern-analysis approach as a starting point for more in-depth analyses (geospatial regression) in the future. For the analysis of this type of data, the numbers of households by type should be treated as distinct objects. All events in a meaningfully delimited study region (e.g. municipalities) will be included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at NUTS3 level) and identification of the median center; standard distance, weighted standard distance and standard deviational ellipses will also be used. Identifying that clustering exists in census household datasets does not provide a detailed picture of the nature and pattern of clustering, but it will be helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, any particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which will be applied to municipal units where a numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and will be applied to the development of localized variants of almost any standard summary statistic. Local Moran's I will give an indication of the homogeneity and diversity of household data at the municipal level.
Keywords: census, geo-demography, households, the Czech Republic
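A minimal sketch of global Moran's I for a municipal-level ratio, using an invented toy neighbourhood matrix; the Czech census values shown are placeholders, not the study's data.

```python
import numpy as np

def morans_i(values: np.ndarray, weights: np.ndarray) -> float:
    """Global Moran's I: I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()
    numerator = (w * np.outer(z, z)).sum()
    denominator = (z ** 2).sum()
    return (n / w.sum()) * numerator / denominator

# toy example: 5 municipalities, share of single-person households, binary contiguity matrix
share = np.array([0.42, 0.40, 0.38, 0.25, 0.27])
w = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])
print("Moran's I:", round(morans_i(share, w), 3))
```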
Procedia PDF Downloads 96
44118 Expanding the Evaluation Criteria for a Wind Turbine Performance
Authors: Ivan Balachin, Geanette Polanco, Jiang Xingliang, Hu Qin
Abstract:
The problem of global warming has raised interest in renewable energy sources. Reducing the cost of wind energy is a challenge. Before building a wind park, conditions such as average wind speed, wind direction, the duration of each wind condition, and the probability of icing must be considered in the design phase. The operating values used in the setting of control systems will also depend on these variables. Here, a procedure is proposed to be included in the evaluation of wind turbine performance, based on the amplitude of wind changes, the number of changes and their duration. A generic study case based on actual data is presented. Data analysis techniques were applied to model the power required by the yaw system based on the amplitude and number of wind changes. A theoretical model relating time, the amplitude of wind changes and the angular speed of nacelle rotation was identified.
Keywords: field data processing, regression determination, wind turbine performance, wind turbine placing, yaw system losses
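A rough sketch of the data-analysis idea described above: fitting a simple regression of yaw-system power demand against the amplitude and number of wind-direction changes. The data, coefficients and linear form are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
amplitude_deg = rng.uniform(2, 40, 200)          # amplitude of wind-direction change [deg]
n_changes = rng.integers(1, 60, 200)             # number of changes per observation window
X = np.column_stack([amplitude_deg, n_changes])

# placeholder "true" relation plus noise, standing in for field measurements
yaw_power = 0.8 * amplitude_deg + 1.5 * n_changes + rng.normal(0, 5, 200)

model = LinearRegression().fit(X, yaw_power)
print("coefficients:", model.coef_, "R^2:", round(model.score(X, yaw_power), 3))
```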
Procedia PDF Downloads 390
44117 Conceptualizing the Knowledge to Manage and Utilize Data Assets in the Context of Digitization: Case Studies of Multinational Industrial Enterprises
Authors: Martin Böhmer, Agatha Dabrowski, Boris Otto
Abstract:
The trend of digitization significantly changes the role of data for enterprises. Data turn from an enabler into an intangible organizational asset that requires management and qualifies as a tradeable good. The idea of a networked economy has gained momentum in the data domain as collaborative approaches to data management emerge. Traditional organizational knowledge consequently needs to be extended by comprehensive knowledge about data. This knowledge about data is vital for organizations to ensure that data quality requirements are met and that data can be effectively utilized and sovereignly governed. As this specific knowledge has received little attention from academics so far, the aim of the research presented in this paper is to conceptualize it by proposing a “data knowledge model”. Relevant model entities have been identified based on a design science research (DSR) approach that iteratively integrates insights from various industry case studies and literature research.
Keywords: data management, digitization, industry 4.0, knowledge engineering, metamodel
Procedia PDF Downloads 356
44116 Feature Weighting Comparison Based on Clustering Centers in the Detection of Diabetic Retinopathy
Authors: Kemal Polat
Abstract:
In this paper, three feature weighting methods have been used to improve the classification performance for diabetic retinopathy (DR). To classify diabetic retinopathy, features extracted from the output of several retinal image processing algorithms, such as image-level, lesion-specific and anatomical components, have been used and fed into the classifier algorithms. The dataset used in this study has been taken from the University of California, Irvine (UCI) machine learning repository. Feature weighting methods including fuzzy c-means clustering based feature weighting, subtractive clustering based feature weighting, and Gaussian mixture clustering based feature weighting have been used and compared with each other in the classification of DR. After feature weighting, five different classifier algorithms comprising multi-layer perceptron (MLP), k-nearest neighbor (k-NN), decision tree, support vector machine (SVM), and Naïve Bayes have been used. The hybrid method based on the combination of subtractive clustering based feature weighting and the decision tree classifier obtained a classification accuracy of 100% in the screening of DR. These results demonstrate that the proposed hybrid scheme is very promising for medical data set classification.
Keywords: machine learning, data weighting, classification, data mining
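A simplified sketch of clustering-centre-based feature weighting followed by a decision tree, in the spirit of the hybrid described above. K-means stands in for subtractive or fuzzy c-means clustering, the weighting rule (feature mean divided by the mean of the cluster centres for that feature) is an illustrative assumption, and the data are synthetic rather than the UCI DR features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = rng.normal(loc=5.0, size=(400, 10))        # placeholder feature table
y = (X[:, 0] + X[:, 3] > 10).astype(int)       # placeholder labels

# cluster the data, then derive one weight per feature from the cluster centres
centers = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).cluster_centers_
weights = X.mean(axis=0) / (centers.mean(axis=0) + 1e-9)
Xw = X * weights                                # weighted feature space

X_tr, X_te, y_tr, y_te = train_test_split(Xw, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy on weighted features:", accuracy_score(y_te, clf.predict(X_te)))
```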
Procedia PDF Downloads 326
44115 Design and Development of a Platform for Analyzing Spatio-Temporal Data from Wireless Sensor Networks
Authors: Walid Fantazi
Abstract:
The development of sensor technology (such as microelectromechanical systems (MEMS), wireless communications, embedded systems, distributed processing and wireless sensor applications) has contributed to a broad range of WSN applications which are capable of collecting a large amount of spatiotemporal data in real time. These systems require real-time processing to manage storage and query the data as they arrive. To cover these needs, we propose in this paper a snapshot spatiotemporal data model based on object-oriented concepts. This model enables efficient storage and reduces data redundancy, which makes it easier to execute spatiotemporal queries and saves analysis time. Further, to ensure the robustness of the system as well as the elimination of congestion in main memory, we propose an in-RAM spatiotemporal indexing technique called Captree*. As a result, we offer an RIA (Rich Internet Application)-based SOA application architecture which allows remote monitoring and control.
Keywords: WSN, indexing data, SOA, RIA, geographic information system
Procedia PDF Downloads 254
44114 Comparative Study between Inertial Navigation System and GPS in Flight Management System Application
Authors: Othman Maklouf, Matouk Elamari, M. Rgeai, Fateh Alej
Abstract:
In modern avionics, the fundamental component is the flight management system (FMS). An FMS is a specialized computer system that automates a wide variety of in-flight tasks, reducing the workload on the flight crew to the point that modern civilian aircraft no longer carry flight engineers or navigators. The main function of the FMS is in-flight management of the flight plan, using various sensors such as the Global Positioning System (GPS) and the Inertial Navigation System (INS) to determine the aircraft's position and guide the aircraft along the flight plan. GPS is a satellite-based navigation system, while an INS generally consists of inertial sensors (accelerometers and gyroscopes). GPS is used to locate positions anywhere on Earth; it consists of satellites, control stations, and receivers. GPS receivers take information transmitted from the satellites and use triangulation to calculate a user's exact location. The basic principle of an INS is the integration of the accelerations observed by the accelerometers on board the moving platform; the system accomplishes this task through appropriate processing of the data obtained from the specific force and angular velocity measurements. Thus, an appropriately initialized inertial navigation system is capable of continuous determination of vehicle position, velocity and attitude without the use of external information. The main objective of this article is to introduce a comparative study between the two systems under different conditions and scenarios using MATLAB with SIMULINK software.
Keywords: flight management system, GPS, IMU, inertial navigation system
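A minimal one-dimensional sketch of the INS principle described above: measured acceleration is integrated twice to track velocity and position, and the drifting dead-reckoning estimate is contrasted with (noisy) GPS fixes. The noise levels, bias and trajectory are illustrative assumptions, not the article's MATLAB/Simulink model.

```python
import numpy as np

dt, n = 0.01, 5000                      # 100 Hz IMU samples, 50 s of flight
t = np.arange(n) * dt
true_acc = 0.5 * np.sin(0.2 * t)        # true along-track acceleration [m/s^2]

rng = np.random.default_rng(4)
meas_acc = true_acc + 0.02 + rng.normal(0, 0.05, n)   # accelerometer bias + noise

# dead reckoning: velocity = integral of acceleration, position = integral of velocity
vel = np.cumsum(meas_acc) * dt
pos_ins = np.cumsum(vel) * dt

# reference truth and a 1 Hz GPS fix with a few metres of error
true_vel = np.cumsum(true_acc) * dt
true_pos = np.cumsum(true_vel) * dt
gps_pos = true_pos[::100] + rng.normal(0, 3, n // 100)

print("final INS drift [m]:", round(abs(pos_ins[-1] - true_pos[-1]), 1))
print("typical GPS error [m]:", round(float(np.mean(np.abs(gps_pos - true_pos[::100]))), 1))
```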
Procedia PDF Downloads 299
44113 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes a considerable amount of time to data quality processes, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project. This is especially true for a big data project, which may involve many datasets and stakeholders and so takes a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators to describe the overall data quality of each dataset, allowing quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can be measured directly on the data within the datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required in the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) ten useful indicators and measurements were developed; (2) based on statistical characteristics, the ten indicators could be reduced to four dimensions; and (3) the developed composite indicator, the SDQI, can describe the overall quality of each dataset and separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with agile methods, by using it for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
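A condensed sketch of the aggregation idea described above: indicator values are reduced to dimensions with PCA (standing in for factor analysis), combined into a single composite score, and cut into three quality levels. The indicator values, weighting scheme and thresholds are illustrative assumptions, not the study's actual SDQI construction.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# rows = datasets, columns = ten standard data-quality indicators (0..1)
indicators = rng.uniform(0, 1, size=(500, 10))

z = StandardScaler().fit_transform(indicators)
pca = PCA(n_components=4).fit(z)               # four dimensions, as in the study
dims = pca.transform(z)

# weight dimensions by explained variance and rescale to a 0..100 composite score
score = dims @ pca.explained_variance_ratio_[:4]
score = 100 * (score - score.min()) / (score.max() - score.min())

levels = np.digitize(score, bins=[33, 66])     # 0=Poor, 1=Acceptable, 2=Good (assumed cut-offs)
print("example composite scores:", np.round(score[:5], 1), "levels:", levels[:5])
```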
Procedia PDF Downloads 139
44112 A Case Study on the Tourists' Satisfaction: Local Gastronomy in Pagudpud, Ilocos Norte
Authors: Reysand Mae A. Abapial, Christine Claire Z. Agra, Quenna Lyn V. De Guzman, Marielle Arianne Joyce Q. Hojilla, John Joseph A. Tiangco
Abstract:
The study focused on the assessment of tourists' satisfaction with the local gastronomy of Pagudpud, Ilocos Norte as a tourist destination, as perceived by 100 tourists visiting the destination, selected through convenience sampling. Mean, percentage frequency and the Wilcoxon rank sum test were used in the analysis of the data. The results revealed that the tourists agree that the local establishments offering local cuisine are accessible in terms of location, internet visibility and facilities for persons with disabilities. The tourists are also willing to pay for the local food because it is attainable, budget-friendly, worth an expensive price, satisfies the cravings, reflects the physical appearance of the establishment, and its quantity is reasonable based on the price. However, the tourists disagree that the local food completes their overall experience as tourists and that it has the potential to satisfy all types of tourists. Recommendations for the enhancement of the local cuisine and implications for future research are discussed.
Keywords: gastronomy, local gastronomy, tourist satisfaction, Pagudpud
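A small sketch of the Wilcoxon rank-sum comparison mentioned above, for example contrasting the satisfaction ratings of two tourist groups; the scores and group definitions are invented placeholders, not the study's data.

```python
from scipy.stats import ranksums

group_a = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]   # e.g. first-time visitors' ratings (1-5 scale)
group_b = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]   # e.g. repeat visitors' ratings

stat, p_value = ranksums(group_a, group_b)
print(f"rank-sum statistic = {stat:.2f}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("satisfaction differs significantly between the two groups")
```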
Procedia PDF Downloads 671
44111 Understanding the Top Questions Asked about Hong Kong by Travellers Worldwide through a Corpus-Based Discourse Analytic Approach
Authors: Phoenix W. Y. Lam
Abstract:
As one of the most important service-oriented industries in contemporary society, tourism has increasingly seen the influence of the Internet on all aspects of travelling. Travellers nowadays habitually research online before making travel-related decisions. One platform on which such research is conducted is destination forums. The emergence of such online destination forums in the last decade has allowed tourists to share their travel experiences quickly and easily with a large number of online users around the world. As such, these destination forums also provide invaluable data for tourism bodies to better understand travellers' views on their destinations. Collecting posts from the Hong Kong travel forum on the world's largest travel website TripAdvisor®, the present study identifies the top questions asked by TripAdvisor users about Hong Kong through a corpus-based discourse analytic approach. Based on questions posted on the forum and their associated meta-data gathered over a one-year period, the study examines the top questions asked by travellers around the world to identify the key geographical locations from which users have shown the greatest interest in the city. Questions raised by travellers from different geographical locations are also compared to see if traveller communities vary by location in terms of their areas of interest. This analysis involves the study of key words and the concordance of frequently occurring items, as well as a close reading of representative examples in context. Findings from the present study show that the travellers who asked the most questions about Hong Kong are from North America and Asia, and that travellers from different locations have different concerns and interests, which are clearly reflected in the language of the questions asked on the travel forum. These findings can therefore provide tourism organisations with useful information about the key markets that should be targeted for promotional purposes, and can also allow such organisations to design advertising campaigns which better address the specific needs of such markets. The present study thus demonstrates the value of applying linguistic knowledge and methodologies to the domain of tourism to address practical issues.
Keywords: corpus, Hong Kong, online travel forum, tourism, TripAdvisor
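A bare-bones sketch of the corpus steps mentioned above: counting frequent words in forum questions and printing a simple concordance (keyword in context). The sample questions are invented placeholders, not TripAdvisor data, and the study's actual corpus tooling is not reproduced here.

```python
from collections import Counter
import re

questions = [
    "Where can I find the best dim sum in Hong Kong?",
    "Is the Airport Express the fastest way to Kowloon?",
    "How many days do I need for Hong Kong Disneyland?",
]

# word-frequency list over all questions
tokens = [w for q in questions for w in re.findall(r"[a-z']+", q.lower())]
print("top words:", Counter(tokens).most_common(5))

def concordance(keyword, texts, window=3):
    """Return short keyword-in-context lines for every occurrence of `keyword`."""
    lines = []
    for text in texts:
        words = text.split()
        for i, w in enumerate(words):
            if w.lower().strip("?.,") == keyword:
                lines.append(" ".join(words[max(0, i - window): i + window + 1]))
    return lines

print(concordance("hong", questions))
```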
Procedia PDF Downloads 177
44110 Near Field Focusing Behaviour of Airborne Ultrasonic Phased Arrays Influenced by Airflows
Authors: D. Sun, T. F. Lu, A. Zander, M. Trinkle
Abstract:
This paper investigates the potential use of airborne ultrasonic phased arrays for imaging in outdoor environments as a means of overcoming the limitations experienced by Kinect sensors, which may fail to work outdoors due to oversaturation of their infrared photodiodes. Ultrasonic phased arrays have been well studied for static media, yet there appears to be no comparable examination in the literature of the impact of a flowing medium on the focusing behaviour of near-field-focused ultrasonic arrays. This paper presents a method for predicting the sound pressure fields produced by a single ultrasound element or an ultrasonic phased array influenced by airflows. The approach can be used to determine the actual focal point location of an array exposed to a known flow field. From the simulation results based upon this model, it can be concluded that uniform flows in the direction orthogonal to the acoustic propagation have a noticeable influence on the sound pressure field, which is reflected in the twisting of the steering angle of the array, whereas uniform flows in the same direction as the acoustic propagation have negligible influence on the array. For an array impacted by a turbulent flow, determining the location of the focused sound field becomes difficult due to the irregularity and continuously changing direction and speed of the turbulent flow. In some circumstances, ultrasonic phased arrays impacted by turbulent flows may not be capable of producing a focused sound field.
Keywords: airborne, airflow, focused sound field, ultrasonic phased array
Procedia PDF Downloads 344
44109 Improved K-Means Clustering Algorithm Using RHadoop with Combiner
Authors: Ji Eun Shin, Dong Hoon Lim
Abstract:
Data clustering is a common technique used in data analysis and appears in many applications, such as artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known clustering algorithm aiming to cluster a set of data points into a predefined number of clusters. In this paper, we implement the K-means algorithm based on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function of our map output to decrease the amount of data needed to be processed by the reducers. The experimental results demonstrated that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also showed that our K-means algorithm using RHadoop with a combiner was faster than the regular algorithm without a combiner as the size of the data set increases.
Keywords: big data, combiner, K-means clustering, RHadoop
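A plain-Python sketch of the combiner idea described above: instead of emitting every point to the reducer, each map task pre-aggregates a per-centre (sum, count) pair, so the reducer only merges a handful of partial sums. This mirrors the MapReduce logic but is not the authors' RHadoop code; the data splits and dimensions are invented.

```python
import numpy as np

def map_with_combiner(points, centers):
    """One map task: assign points to the nearest centre and emit per-centre (sum, count)."""
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    return {k: (points[labels == k].sum(axis=0), int((labels == k).sum()))
            for k in np.unique(labels)}

def reduce_centers(partials, k, dim):
    """Reducer: merge partial sums from all map tasks into new centres."""
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for part in partials:
        for c, (s, n) in part.items():
            sums[c] += s
            counts[c] += n
    return sums / np.maximum(counts, 1)[:, None]

rng = np.random.default_rng(6)
data_splits = [rng.normal(loc, 0.5, (500, 2)) for loc in (0, 5)]   # two "input splits"
centers = rng.normal(2.5, 2.0, (2, 2))
for _ in range(10):                                                # K-means iterations
    centers = reduce_centers([map_with_combiner(s, centers) for s in data_splits], 2, 2)
print("final centres:\n", centers.round(2))
```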
Procedia PDF Downloads 438
44108 Digital Design and Fabrication: A Review of Trend and Its Impact in the African Context
Authors: Mohamed Al Araby, Amany Salman, Mostafa Amin, Mohamed Madbully, Dalia Keraa, Mariam Ali, Marah Abdelfatah, Mariam Ahmed, Ahmed Hassab
Abstract:
In recent years, the architecture, engineering, and construction (A.E.C.) industry has been exposed to important innovations, most notably the global integration of digital design and fabrication (D.D.F.) processes into the industry's workflow. Despite this evolution in the sector, Africa has been excluded from the examination of this development. The reason behind this exclusion is the preconceived view of it as a developing region that still employs traditional methods of construction. The primary objective of this review is to investigate the trend of digital construction (D.C.) in the African context and the difficulties in its regular utilization. This objective can be attained by recognizing the notion of digital construction in Africa and evaluating the impact of the projects deploying this technology on both the immediate and broader contexts. The paper's methodology begins with the collection of data from 224 initiatives throughout Africa. Then, 50 of these projects were selected based on the criteria of project recency, typology variety, and location diversity. After that, a literature-based comparative analysis was undertaken. This study's findings reveal a pattern of motivation for applying digital fabrication processes. Moreover, it is essential to evaluate the socio-economic effects of these projects on the population living near the analyzed subject. The last step in this study is identifying the influence on neighboring nations.
Keywords: Africa, digital construction, digital design, fabrication
Procedia PDF Downloads 179
44107 An Exploratory Study on the Integration of Neurodiverse University Students into Mainstream Learning and Their Performance: The Case of the Jones Learning Center
Authors: George Kassar, Phillip A. Cartwright
Abstract:
Based on data collected from The Jones Learning Center (JLC), University of the Ozarks, Arkansas, U.S., this study explores the impact of inclusive classroom practices on neurodiverse college students and their consequent academic performance after participating in integrative therapies designed to support students who are intellectually capable of obtaining a college degree, but who require support for learning challenges owing to disabilities, AD/HD, or ASD. The purpose of this study is two-fold. The first objective is to explore the general process, special techniques, and practices of the JLC inclusive program. The second objective is to identify and analyze the effectiveness of these processes, techniques, and practices in supporting the academic performance of enrolled college students with learning disabilities following integration into mainstream university learning. Integrity, transparency, and confidentiality are vital in the research. All questions were shared in advance and confirmed by the concerned management at the JLC. While administering the questionnaire and conducting the interviews, the purpose of the study, its scope, aims, and objectives were clearly explained to all participants prior to starting the questionnaire/interview. Confidentiality of all participants was assured and guaranteed by using encrypted identification of individuals, thus limiting access to the data to only the researcher, and by storing the data in a secure location. Respondents were also informed that their participation in this research was voluntary and that they could withdraw from it at any time prior to submission if they wished. Ethical consent was obtained from the participants before proceeding with video recording of the interviews. This research uses a mixed-methods approach. The research design involves collecting, analyzing, and “mixing” quantitative and qualitative methods and data to enable the research inquiry. The research process is organized around a five-pillar approach. The first three pillars are focused on testing the first hypothesis (H1), directed toward determining the extent to which the academic performance of JLC students improved after involvement with the comprehensive JLC special program. The other two pillars relate to the second hypothesis (H2), which is directed toward determining the extent to which the collective and applied knowledge at the JLC is distinctive from typical practices in the field. The data collected for the research were obtained from three sources: 1) a set of secondary data in the form of Grade Point Averages (GPA) received from the registrar, 2) a set of primary data collected through a structured questionnaire administered to students and alumni at the JLC, and 3) another set of primary data collected through interviews conducted with staff and educators at the JLC. The significance of this study is twofold. First, it validates the effectiveness of the special program at the JLC for college-level students who learn differently. Second, it identifies the distinctiveness of the mix of techniques, methods, and practices, including the special individualized and personalized one-on-one approach at the JLC.
Keywords: education, neuro-diverse students, program effectiveness, Jones learning center
Procedia PDF Downloads 74
44106 Predicting of Hydrate Deposition in Loading and Offloading Flowlines of Marine CNG Systems
Authors: Esam I. Jassim
Abstract:
The main aim of this paper is to demonstrate the model's capability of predicting the nucleation process, the growth rate, and the deposition potential of second-phase particles in gas flowlines. The primary objective of the research is to predict the risk hazards involved in the marine transportation of compressed natural gas. However, the proposed model can be equally used for other applications, including the production and transportation of natural gas in any high-pressure flowline. The proposed model employs the following three main components to approach the problem: a computational fluid dynamics (CFD) technique is used to configure the flow field; a nucleation model is developed and incorporated in the simulation to predict the incipient hydrate particle size and growth rate; and the deposition of the gas/particle flow is modelled using the concept of the particle deposition velocity. These components are integrated into a comprehensive model to locate hydrate deposition in natural gas flowlines. The present research is intended to foresee the deposition location of solid particles that could occur in a real Compressed Natural Gas loading and offloading application. A pipeline of 120 m length and different sizes carrying natural gas is considered in the study. The location of particle deposition formed as a result of a restriction is determined based on the procedure mentioned earlier, and the effect of water content and downstream pressure is studied. The critical flow speed that prevents such particles from accumulating over a certain pipe length is also addressed.
Keywords: hydrate deposition, compressed natural gas, marine transportation, oceanography
Procedia PDF Downloads 487
44105 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus
Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya
Abstract:
Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike other self-driving vehicles, which are usually developed to operate with other vehicles and reside only on road networks, CATE will operate exclusively on walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering and computer science are working together to attack the problem from different perspectives (hardware, software and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI interface for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent landmarks and the edges represent paths that the car should follow with some designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm. D* Lite will be explored to efficiently recompute the path when there are any changes to the map. CATE shall avoid any static obstacles and walking pedestrians within some safe distance. Unlike traveling along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of the pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work under GPS-denied conditions. CATE relies on its GPS to give its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that allows the fusion of data from multiple sensors (such as GPS, IMU, odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily get degraded or blocked on campus due to high-rise buildings or trees; the UKF can also help here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
Keywords: driverless vehicle, path planning, sensor fusion, state estimate
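A compact sketch of the default planner described above: A* over a spatial graph whose vertices are campus landmarks with (x, y) coordinates and whose edge costs are straight-line distances. The toy graph below is invented, not the Cal Poly Pomona campus map.

```python
import heapq, math

coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1), "E": (3, 1)}
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}

def dist(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

def a_star(start, goal):
    # frontier entries: (f = g + heuristic, g, node, path so far)
    frontier = [(dist(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in edges[node]:
            new_g = g + dist(node, nxt)
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + dist(nxt, goal), new_g, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("A", "E"))   # e.g. (['A', 'B', 'C', 'D', 'E'], total path length)
```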
Procedia PDF Downloads 144
44104 Static vs. Stream Mining Trajectories Similarity Measures
Authors: Musaab Riyadh, Norwati Mustapha, Dina Riyadh
Abstract:
Trajectory similarity can be defined as the cost of transforming one trajectory into another according to a certain similarity method. It is the core of numerous mining tasks such as clustering, classification, and indexing. Various approaches have been suggested to measure similarity based on the geometric and dynamic properties of trajectories, the overlap between trajectory segments, and the confined area between entire trajectories. In this article, these approaches are evaluated based on computational cost, memory usage, accuracy, and the amount of data that is needed in advance, to determine their suitability for stream mining applications. The evaluation results show that stream mining applications favour similarity methods which have low computational cost and memory usage, require only a single scan of the data, and are free of mathematical complexity, owing to the high-speed generation of the data.
Keywords: global distance measure, local distance measure, semantic trajectory, spatial dimension, stream data mining
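A short sketch contrasting two of the similarity families discussed above: a lock-step Euclidean distance (cheap, single pass, stream-friendly) and dynamic time warping (more flexible, but quadratic and needing the full trajectories in advance). The trajectories are toy (x, y) sequences, and these two measures are illustrative stands-ins for the methods surveyed in the article.

```python
import math

def lockstep_euclidean(t1, t2):
    # average point-to-point distance over aligned positions; one pass, no look-ahead
    return sum(math.dist(p, q) for p, q in zip(t1, t2)) / min(len(t1), len(t2))

def dtw(t1, t2):
    # classic O(n*m) dynamic time warping over complete trajectories
    n, m = len(t1), len(t2)
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(t1[i - 1], t2[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

a = [(0, 0), (1, 1), (2, 2), (3, 3)]
b = [(0, 0.5), (1, 1.5), (2, 2.5), (3, 3.5)]
print("lock-step:", round(lockstep_euclidean(a, b), 3), "DTW:", round(dtw(a, b), 3))
```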
Procedia PDF Downloads 396
44103 Computer-Based versus Paper-Based Tests: A Comparative Study of Two Types of Indonesian National Examination for Senior High School Students
Authors: Faizal Mansyur
Abstract:
The objective of this research is to find out whether there is a significant difference in the English language scores of senior high school students in the Indonesian National Examination between students tested using computer-based and paper-based tests. The population of this research is senior high school students in South Sulawesi Province who sat the Indonesian National Examination in the 2015/2016 academic year. The samples of this research are 800 students' scores from 8 schools, taken by employing the multistage random sampling technique. The data of this research are secondary data, obtained from the education office of South Sulawesi. In analyzing the collected data, the researcher employed the independent samples t-test with the help of the SPSS v.24 program. The findings of this research reveal that there is a significant difference in the English language scores of senior high school students in the Indonesian National Examination between students tested using computer-based and paper-based tests (p < .05). Moreover, students tested using the paper-based test (Mean = 63.13, SD = 13.63) achieved higher scores than those tested using the computer-based test (Mean = 46.33, SD = 14.68).
Keywords: computer-based test, paper-based test, Indonesian national examination, testing
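A minimal sketch of the comparison described above using SciPy's independent-samples t-test in place of SPSS; the two score arrays are synthetic stand-ins generated from the reported group means and standard deviations, not the 800 actual records.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
pbt_scores = rng.normal(63.13, 13.63, 400)   # paper-based test group (reported mean/SD)
cbt_scores = rng.normal(46.33, 14.68, 400)   # computer-based test group (reported mean/SD)

t_stat, p_value = ttest_ind(pbt_scores, cbt_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("significant difference between PBT and CBT scores")
```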
Procedia PDF Downloads 167
44102 Secure Content Centric Network
Authors: Syed Umair Aziz, Muhammad Faheem, Sameer Hussain, Faraz Idris
Abstract:
A content centric network is a network based on the mechanism of sending and receiving data according to interest, with data requests directed to the specified node (which has cached the data). In this network, security is bound to the content, not to the host, making it host-independent and secure. Security is applied by taking the content's MAC (message authentication code) and encrypting it with the public key of the receiver. On the receiver end, the message is first verified, and after verification the message is saved and decrypted using the receiver's private key.
Keywords: content centric network, client-server, host security threats, message authentication code, named data network, network caching, peer-to-peer
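An illustrative sketch, using the Python "cryptography" package, of the idea described above: compute an HMAC over the content and encrypt that tag with the receiver's RSA public key; the receiver decrypts the tag with its private key and verifies it against the content. The key sizes, algorithms and shared MAC key are assumptions for the sketch, not the paper's specification.

```python
from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.asymmetric import rsa, padding

content = b"named-data-object: /videos/clip42"
mac_key = b"shared-content-mac-key"

# sender side: MAC over the content, then encrypt the MAC for the receiver
tag = hmac.HMAC(mac_key, hashes.SHA256())
tag.update(content)
mac = tag.finalize()

receiver_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
encrypted_mac = receiver_priv.public_key().encrypt(
    mac,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# receiver side: decrypt the MAC and verify it against the received content
recovered_mac = receiver_priv.decrypt(
    encrypted_mac,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
check = hmac.HMAC(mac_key, hashes.SHA256())
check.update(content)
check.verify(recovered_mac)          # raises InvalidSignature if the content was tampered with
print("content verified")
```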
Procedia PDF Downloads 644
44101 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease
Authors: Usama Ahmed
Abstract:
Data mining is the process of analyzing data in order to extract helpful information and make predictions. It is a field of research that addresses various types of problems. In data mining, classification is an important technique for classifying different kinds of data. Diabetes is one of the most common diseases. This paper applies different classification techniques to a diabetes dataset using the Waikato Environment for Knowledge Analysis (WEKA) and determines which algorithm is most suitable. The best classification algorithm on the diabetes data is Naïve Bayes, with an accuracy of 76.31% and a model build time of 0.06 seconds.
Keywords: data mining, classification, diabetes, WEKA
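A small scikit-learn sketch analogous to the WEKA workflow described above (WEKA itself is a Java tool): train a Naïve Bayes classifier on a diabetes-style feature table and report accuracy. The synthetic features and labels stand in for the real diabetes dataset and will not reproduce the reported 76.31% figure.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(8)
X = rng.normal(size=(768, 8))                                       # e.g. 8 clinical attributes
y = (X[:, 1] + X[:, 5] + rng.normal(0, 1, 768) > 0).astype(int)     # placeholder outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 4))
```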
Procedia PDF Downloads 147