Search results for: spatio-temporal data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24422

23552 Data and Biological Sharing Platforms in Community Health Programs: Partnership with Rural Clinical School, University of New South Wales and Public Health Foundation of India

Authors: Vivian Isaac, A. T. Joteeshwaran, Craig McLachlan

Abstract:

The University of New South Wales (UNSW) Rural Clinical School has a strategic collaborative focus on chronic disease and public health. Our objectives are to understand rural environmental and biological interactions in vulnerable community populations. The UNSW Rural Clinical School translational model is a spoke-and-hub network that connects rural data and biological specimens with city-based collaborative public health research networks. Similar spoke-and-hub models are prevalent across research centers in India. The Australia-India Council grant was awarded so we could establish sustainable public health and community research collaborations. As part of the collaborative network, we are developing strategies around data and biological sharing platforms between the Indian Institute of Public Health, Public Health Foundation of India (PHFI), Hyderabad and the Rural Clinical School UNSW. The key objective is to understand how research collaborations are conducted in India and how data can be shared and tracked with external collaborators such as ourselves. A framework to improve data sharing for research collaborations, including DNA, was proposed as a project outcome. The complexities of sharing biological data have been investigated via a visit to India. A flagship sustainable project between the Rural Clinical School UNSW and PHFI would illustrate a model of data sharing platforms.

Keywords: data sharing, collaboration, public health research, chronic disease

Procedia PDF Downloads 427
23551 Discrimination of Artificial Intelligence

Authors: Iman Abu-Rub

Abstract:

This research paper examines whether Artificial Intelligence is, in fact, racist or not. Studies from around the world, covering different communities, were analyzed to further understand AI's true implications for those communities. The Black, Asian, and Muslim communities were each analyzed and discussed to determine whether AI is biased towards them. It was found that the biggest problem AI faces is the biased distribution of data collection: most of the data inserted and coded into AI systems come from white males, which significantly disadvantages other communities in terms of reliable cultural, political, or medical research. Nonetheless, various studies have been done that raise awareness of this issue and could even resolve it if applied correctly. Governments and big corporations are able to implement strategies in their AI systems to avoid racist results, which would otherwise cause cultural hatred as well as unreliable data in fields such as medicine. Overall, Artificial Intelligence is not racist per se; rather, biased data and the current racist culture online manipulate AI into becoming racist.

Keywords: social media, artificial intelligence, racism, discrimination

Procedia PDF Downloads 101
23550 A Neural Network Modelling Approach for Predicting Permeability from Well Logs Data

Authors: Chico Horacio Jose Sambo

Abstract:

Recently, neural networks have gained popularity for solving complex nonlinear problems. Permeability is a fundamental reservoir characteristic that is distributed in an anisotropic and nonlinear manner. For this reason, permeability prediction from well log data is well suited to neural networks and other computer-based techniques. The main goal of this paper is to predict reservoir permeability from well log data using a neural network approach. A multi-layered perceptron trained by the back-propagation algorithm was used to build the predictive model. The performance of the model was measured by the correlation coefficient, evaluated on the training, testing, validation and full data sets. The results show that the neural network was capable of reproducing permeability with accuracy in all cases: the calculated correlation coefficients for training, testing and validation were 0.96273, 0.89991 and 0.87858, respectively. The generalization of the results to other fields can be made after examining new data, and a regional study might make it possible to study reservoir properties with cheap and very quickly constructed models.
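
The abstract specifies only a back-propagation-trained multi-layer perceptron scored by correlation coefficient; the following is a minimal sketch of that workflow rather than the authors' exact setup, with hypothetical log features (e.g. gamma ray, density, neutron porosity, resistivity) and synthetic data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical log features: gamma ray, density, neutron porosity, resistivity
X = rng.normal(size=(500, 4))
y = np.exp(0.8 * X[:, 2] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500))  # toy permeability

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0))
model.fit(X_train, y_train)
# Score by the correlation coefficient, as in the abstract
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"test correlation coefficient: {r:.3f}")
```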

Keywords: neural network, permeability, multilayer perceptron, well log

Procedia PDF Downloads 375
23549 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, answering questions such as: what products were often purchased together? Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER for frequent itemset mining can produce a speed-up of 3.1 times compared to the original algorithm while maintaining an accuracy of 71%.
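
FASTER's internal procedure is not given in the abstract, so the sketch below only illustrates the entropy half of the idea: scoring items by Shannon entropy, discarding uninformative ones, and then running a naive frequent-itemset pass over the reduced item set. The transactions and threshold are invented:

```python
import math
from itertools import combinations

transactions = [
    {"milk", "bread", "butter", "bag"},
    {"milk", "bread", "bag"},
    {"bread", "butter", "bag"},
    {"milk", "butter", "bag"},
    {"milk", "bread", "butter", "bag"},
]

def entropy(item):
    """Shannon entropy of an item's presence/absence across records."""
    p = sum(item in t for t in transactions) / len(transactions)
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

items = {i for t in transactions for i in t}
# "bag" occurs in every record, so its entropy is 0 and it is dropped; this
# shrinks the search space at some cost in completeness, in the spirit of a
# pre-processor that trades accuracy for speed.
kept = {i for i in items if entropy(i) > 0.5}

min_support = 3
for r in (1, 2):
    for combo in combinations(sorted(kept), r):
        support = sum(set(combo) <= t for t in transactions)
        if support >= min_support:
            print(combo, "support:", support)
```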

Keywords: rough-sets, classification, feature selection, entropy, outliers, frequent itemset mining

Procedia PDF Downloads 414
23548 Application of Regularized Spatio-Temporal Models to the Analysis of Remote Sensing Data

Authors: Salihah Alghamdi, Surajit Ray

Abstract:

Space-time data can be observed over irregularly shaped manifolds, which might have complex boundaries or interior gaps. Most existing methods do not consider the shape of the data, and as a result it is difficult to model irregularly shaped data while accommodating the complex domain. We used a method that can deal with space-time data distributed over non-planar regions. The method is based on partial differential equations and finite element analysis. The model can be estimated using a penalized least squares approach with a regularization term that controls over-fitting. The model is regularized using two roughness penalties, which consider the spatial and temporal regularities separately: the integrated square of the second derivative of the basis function is used as the temporal penalty, while the spatial penalty consists of the integrated square of the Laplace operator, integrated exclusively over the domain of interest, which is determined using the finite element technique. In this paper, we applied a spatio-temporal regression model with partial differential equation regularization (ST-PDE) to analyze remote sensing data measuring the greenness of vegetation, as captured by the enhanced vegetation index (EVI). The EVI data consist of measurements taking values between -1 and 1, reflecting the level of greenness of a region over a period of time. We applied the ST-PDE approach to an irregularly shaped region of the EVI data. The approach efficiently accommodates irregularly shaped regions, taking the complex boundaries into account rather than smoothing across them. Furthermore, the approach succeeds in capturing the temporal variation in the data.
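
The penalized least-squares estimate described here amounts to minimizing ||y - Xb||^2 + ls * b'Ps b + lt * b'Pt b, where Ps and Pt are the spatial and temporal roughness penalty matrices. A toy version, with simple second-difference matrices standing in for the finite-element penalties of the paper, might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 30
X = rng.normal(size=(n, p))
b_true = np.sin(np.linspace(0, 3, p))          # a smooth coefficient field
y = X @ b_true + 0.1 * rng.normal(size=n)

D = np.diff(np.eye(p), 2, axis=0)              # second-difference operator
Ps = D.T @ D                                   # stand-in "spatial" penalty
Pt = D.T @ D                                   # stand-in "temporal" penalty
ls, lt = 1.0, 1.0                              # smoothing parameters

# Closed-form solution of the penalized normal equations
b_hat = np.linalg.solve(X.T @ X + ls * Ps + lt * Pt, X.T @ y)
print(np.round(b_hat[:5], 3))
```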

Keywords: irregularly shaped domain, partial differential equations, finite element analysis, complex boundary

Procedia PDF Downloads 124
23547 Utilising an Online Data Collection Platform for the Development of a Community Engagement Database: A Case Study on Building Inter-Institutional Partnerships at UWC

Authors: P. Daniels, T. Adonis, P. September-Brown, R. Comalie

Abstract:

The community engagement unit at the University of the Western Cape was tasked with establishing a community engagement database to store information on all community engagement projects related to the university. The wealth of knowledge obtained from the various disciplines would be used to facilitate interdisciplinary collaboration within the university, as well as community-university partnership opportunities. The purpose of this qualitative study was to explore electronic data collection through the development of a database. Two types of electronic data collection platforms were used, namely an online questionnaire and email. The semi-structured questionnaire was used to collect data related to community engagement projects from different faculties and departments at the university. There are many benefits to using an electronic data collection platform, such as reduced cost and time, ease in reaching large numbers of potential respondents, and the possibility of providing anonymity to participants. Despite all these advantages, there were just as many challenges, as depicted in our findings. The findings suggest that certain barriers existed to using an electronic platform for data collection, even in an academic environment where knowledge and resources were in abundance. One of the challenges experienced was the lack of dissemination of information via email to staff within faculties. The online software used for the questionnaire had its own limitations, such as the questionnaire only being accessible from the same electronic device. In a few cases, academics only completed the questionnaire after a telephonic prompt or a face-to-face meeting, which raises the question: is higher education in South Africa ready to embrace electronic platforms for data collection?

Keywords: community engagement, database, data collection, electronic platform, electronic tools, knowledge sharing, university

Procedia PDF Downloads 249
23546 Women Entrepreneurial Resiliency Amidst COVID-19

Authors: Divya Juneja, Sukhjeet Kaur Matharu

Abstract:

Purpose: The paper is aimed at identifying the challenging factors experienced by women entrepreneurs in India in operating their enterprises amidst the challenges posed by the COVID-19 pandemic. Methodology: The sample for the study comprised 396 women entrepreneurs from different regions of India. A purposive sampling technique was adopted for data collection. Data were collected through a self-administered questionnaire. Analysis was performed using the SPSS package for quantitative data analysis. Findings: The results of the study state that entrepreneurial characteristics, resourcefulness, networking, adaptability, and continuity have a positive influence on the resiliency of women entrepreneurs when faced with a crisis situation. Practical Implications: The findings of the study have some important implications for women entrepreneurs, organizations, government, and other institutions extending support to entrepreneurs.

Keywords: women entrepreneurs, analysis, data analysis, positive influence, resiliency

Procedia PDF Downloads 95
23545 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
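
As a minimal illustration of the p >> n regime the abstract describes, the sketch below fits scikit-learn's PLS regression to synthetic, highly correlated data; it shows only the latent-component idea, not the sparse lasso-plus-Cauchy penalty proposed in the paper:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n, p = 60, 500                      # many more predictors than observations
latent = rng.normal(size=(n, 3))    # a few underlying factors drive everything
X = latent @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

pls = PLSRegression(n_components=3)  # latent components replace raw predictors
pls.fit(X, y)
print("R^2 on training data:", round(pls.score(X, y), 3))
```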

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, highly correlated data

Procedia PDF Downloads 29
23544 The Use of Voice in Online Public Access Catalogs as a Faster Searching Device

Authors: Maisyatus Suadaa Irfana, Nove Eka Variant Anna, Dyah Puspitasari Sri Rahayu

Abstract:

Technological developments provide convenience to people everywhere. Until recently, human-computer communication was done via text; with the development of technology, it can now be conducted by voice, much like communication between human beings. This provides an easy facility for many people, especially those who have special needs. Voice search technology is applied to the search of book collections in the OPAC (Online Public Access Catalog), so library visitors will find it faster and easier to locate the books they need. Integration with Google is needed to convert the voice into text. To optimize search time and results, the server first downloads all the book data available in the server database and converts it into JSON format. In addition, several algorithms are incorporated, including decomposition (parsing) of the JSON format into arrays, index construction, and analysis of the results. This aims to make searching much faster than the usual OPAC search, in which the database is queried directly for every search request. A data update menu is provided to enable users to perform their own data updates and get the latest information.
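
A rough sketch of the pipeline described above, assuming the SpeechRecognition package for the Google speech-to-text step; the catalogue records and index structure are invented placeholders:

```python
import speech_recognition as sr  # pip install SpeechRecognition

books = [
    {"id": 1, "title": "Introduction to Information Retrieval"},
    {"id": 2, "title": "Modern Library Cataloguing"},
]

# Build a simple inverted index once, after downloading the catalogue data,
# so searches hit memory instead of the database on every request
index = {}
for book in books:
    for token in book["title"].lower().split():
        index.setdefault(token, set()).add(book["id"])

def voice_search():
    r = sr.Recognizer()
    with sr.Microphone() as source:       # capture the spoken query
        audio = r.listen(source)
    query = r.recognize_google(audio)     # speech-to-text via Google
    hits = set()
    for token in query.lower().split():
        hits |= index.get(token, set())
    return [b for b in books if b["id"] in hits]
```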

Keywords: OPAC, voice, searching, faster

Procedia PDF Downloads 321
23543 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have had widespread use in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications: a 3D point cloud is created with LiDAR technology by obtaining numerous point data. Recently, however, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of the DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation methods. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can reduce image-based point cloud data sets to the 50% density level while still maintaining the quality of the DTM.
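
A sketch of the reduction-and-compare experiment, assuming the pykrige package for ordinary kriging; the terrain surface and the 50% reduction level are illustrative:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # pip install pykrige

rng = np.random.default_rng(3)
n = 400
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)
z = 50 + 0.1 * x + 5 * np.sin(y / 10) + rng.normal(0, 0.2, n)  # toy elevations

keep = rng.choice(n, size=n // 2, replace=False)   # 50% random reduction
gridx = gridy = np.linspace(0, 100, 50)

def dtm(xs, ys, zs):
    ok = OrdinaryKriging(xs, ys, zs, variogram_model="spherical")
    surface, _ = ok.execute("grid", gridx, gridy)
    return surface

full = dtm(x, y, z)
reduced = dtm(x[keep], y[keep], z[keep])
rmse = np.sqrt(np.mean((full - reduced) ** 2))
print(f"RMSE between 100% and 50% DTMs: {rmse:.3f} m")
```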

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 140
23542 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields as a way of analyzing research issues based on large volumes of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, part of the government's open data, are voluminous, complete and frequently updated. One may recall that living areas have traditionally been delimited by location, population, area and subjective consciousness. However, these factors cannot appropriately reflect people's daily movement paths. In this study, the concept of "Living Area" is replaced by "Influence Range" to show dynamics and variation with the time and purpose of activities. This study uses data mining with Python and Excel and visualizes trip counts with GIS to explore the influence range of Tainan City and the purpose of trips, and discusses how living areas are currently delimited. It creates a dialogue between the concepts of "Central Place Theory" and "Living Area", presents a new point of view, and integrates the application of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization

Procedia PDF Downloads 261
23541 Performance Analysis of Hierarchical Agglomerative Clustering in a Wireless Sensor Network Using Quantitative Data

Authors: Tapan Jain, Davender Singh Saini

Abstract:

Clustering is a useful mechanism in wireless sensor networks which helps to cope with scalability and data transmission problems. The basic aim of our research work is to provide efficient clustering using hierarchical agglomerative clustering (HAC). If the distance between the sensing nodes is calculated using their locations, then it is quantitative HAC. This paper compares the various agglomerative clustering techniques applied in a wireless sensor network using quantitative data. The simulations are done in MATLAB and the comparisons are made between the different protocols using dendrograms.
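
A minimal sketch of quantitative HAC on node coordinates, mirroring the MATLAB comparison in Python with SciPy; the node positions and cluster count are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
nodes = rng.uniform(0, 100, size=(30, 2))      # sensor node (x, y) positions

for method in ("single", "complete", "average", "ward"):
    Z = linkage(nodes, method=method)          # agglomerative merge tree
    clusters = fcluster(Z, t=4, criterion="maxclust")
    print(method, "cluster sizes:", np.bincount(clusters)[1:])

dendrogram(linkage(nodes, method="ward"))      # visual comparison of merges
plt.show()
```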

Keywords: routing, hierarchical clustering, agglomerative, quantitative, wireless sensor network

Procedia PDF Downloads 575
23540 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images

Authors: Sophia Shi

Abstract:

Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The AKI mortality rate is high in the ICU, and onset is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model for predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images with corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine, among others, two numeric models using MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds an image-only model was built. Convolutional neural networks (CNNs) were used: VGG and ResNet for the numeric data and ResNet for the image data. They were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after a softmax layer. The hybrid model successfully predicted AKI: its highest AUROC was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient's admission, so that doctors can take preventative measures and diminish mortality risk and severe kidney damage.
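
A schematic Keras rendering of the fusion design described above (a record branch plus an image branch, concatenated feature maps, then dense layers and a softmax output); the input shapes and layer sizes are assumptions, not the authors' configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

record_in = keras.Input(shape=(20,), name="patient_records")  # e.g. creatinine
r = layers.Dense(32, activation="relu")(record_in)            # numeric branch

image_in = keras.Input(shape=(128, 128, 1), name="kidney_ultrasound")
i = layers.Conv2D(16, 3, activation="relu")(image_in)         # image branch
i = layers.MaxPooling2D()(i)
i = layers.Conv2D(32, 3, activation="relu")(i)
i = layers.GlobalAveragePooling2D()(i)

fused = layers.Concatenate()([r, i])            # concatenated feature maps
x = layers.Dense(64, activation="relu")(fused)
x = layers.Dense(32, activation="relu")(x)      # two fully connected layers
out = layers.Dense(2, activation="softmax")(x)  # binary AKI / no-AKI output

model = keras.Model([record_in, image_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy", keras.metrics.AUC(name="auroc")])
model.summary()
```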

Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG

Procedia PDF Downloads 113
23539 Qualitative Data Analysis for Health Care Services

Authors: Taner Ersoz, Filiz Ersoz

Abstract:

This study was designed to enable the application of multivariate techniques in the interpretation of categorical data for measuring health care service satisfaction in Turkey. Data were collected from a total of 17,726 respondents. The establishment of the sample group and the collection of the data were carried out by a joint team from the Ministry of Health and the Turkish Statistical Institute (TurkStat). Multiple correspondence analysis (MCA) was applied to the data of the 2,882 respondents who answered the questionnaire in full. The analysis indicated that, in evaluating health services, females, public employees, and younger and more highly educated individuals were more critical and more likely to complain than males, private sector employees, and older and less educated individuals. Overall, 53% of the respondents were pleased with the improvements in health care services over the past three years. This study demonstrates public awareness of health services and health care satisfaction in Turkey. Awareness of health service quality increases with education level, while older individuals and males appear to have lower expectations of health services.
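
A small sketch of MCA on categorical survey responses, assuming the prince package; the variables and answer categories are invented stand-ins for the survey items:

```python
import pandas as pd
import prince  # pip install prince

# Invented stand-ins for categorical satisfaction-survey items
df = pd.DataFrame({
    "sex":        ["female", "male", "female", "male", "female", "male"],
    "employment": ["public", "private", "public", "public", "private", "private"],
    "satisfied":  ["no", "yes", "no", "yes", "yes", "yes"],
})

mca = prince.MCA(n_components=2).fit(df)
print(mca.row_coordinates(df))     # respondents placed in the 2-D MCA space
print(mca.column_coordinates(df))  # categories; nearby categories co-occur
```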

Keywords: multiple correspondence analysis, multivariate categorical data, health care services, health satisfaction survey

Procedia PDF Downloads 213
23538 Development of a Numerical Model to Predict Wear in Grouted Connections for Offshore Wind Turbine Generators

Authors: Paul Dallyn, Ashraf El-Hamalawi, Alessandro Palmeri, Bob Knight

Abstract:

In order to better understand the long-term implications of the grout wear failure mode in large-diameter plain-sided grouted connections, a numerical model has been developed and calibrated that can take advantage of existing operational plant data to predict the wear accumulation for the actual load conditions experienced over a given period, thus limiting the need for expensive monitoring systems. This model has been derived and calibrated based on site structural condition monitoring (SCM) data and supervisory control and data acquisition (SCADA) data for two operational wind turbine generator substructures afflicted with this challenge, along with experimentally derived wear rates.

Keywords: grouted connection, numerical model, offshore structure, wear, wind energy

Procedia PDF Downloads 433
23537 Multimodal Deep Learning for Human Activity Recognition

Authors: Ons Slimene, Aroua Taamallah, Maha Khemaja

Abstract:

In recent years, human activity recognition (HAR) has been a key area of research due to its diverse applications, and it has garnered increasing attention in the field of computer vision. HAR plays an important role in people's daily lives, as it can learn advanced knowledge about human activities from data. In HAR, activities are usually represented by exploiting different types of sensors, such as embedded sensors or visual sensors. However, these sensors have limitations, such as local obstacles, image-related obstacles, sensor unreliability, and consumer concerns. Recently, several deep learning-based approaches have been proposed for HAR; these approaches are classified into two categories based on the type of data used: vision-based approaches and sensor-based approaches. This research paper highlights the importance of multimodal data fusion, combining skeleton data obtained from videos with data generated by embedded sensors, using deep neural networks to achieve HAR. We propose a deep multimodal fusion network based on a two-stream architecture. The two streams use a Convolutional Neural Network combined with a Bidirectional LSTM (CNN-BiLSTM) to process the skeleton data and the data generated by embedded sensors, and fusion at the feature level is considered. The proposed model was evaluated on the public OPPORTUNITY++ dataset and produced an accuracy of 96.77%.
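
A schematic of the two-stream CNN-BiLSTM fusion in Keras; the sequence lengths, channel counts and class count are assumptions, not the authors' configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

def stream(name, channels):
    """One CNN-BiLSTM stream over a fixed-length sensor/skeleton sequence."""
    inp = keras.Input(shape=(100, channels), name=name)  # 100 timesteps
    x = layers.Conv1D(32, 5, activation="relu")(inp)
    x = layers.Bidirectional(layers.LSTM(32))(x)
    return inp, x

skel_in, skel_feat = stream("skeleton", 45)     # e.g. 15 joints x 3 coords
sens_in, sens_feat = stream("sensors", 9)       # e.g. acc + gyro + mag axes

fused = layers.Concatenate()([skel_feat, sens_feat])  # feature-level fusion
out = layers.Dense(17, activation="softmax")(fused)   # activity classes

model = keras.Model([skel_in, sens_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```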

Keywords: human activity recognition, action recognition, sensors, vision, human-centric sensing, deep learning, context-awareness

Procedia PDF Downloads 76
23536 Impact of Foreign Trade on Economic Growth: A Panel Data Analysis for OECD Countries

Authors: Burcu Guvenek, Duygu Baysal Kurt

Abstract:

The impact of foreign trade on economic growth has been discussed since the time of the Classical Economists. Today, with increasing globalization, foreign trade has become even more important for national economies. When it comes to foreign trade, policies ranging from protectionism to free trade are implemented, varying from country to country and from time to time. In general, a positive effect of foreign trade on economic growth is asserted. However, while the economics literature contains studies supporting this general view, there are also studies pointing in the opposite direction. In this paper, the impact of foreign trade on economic growth is investigated with the help of panel data analysis, using GDP and foreign trade data for 24 OECD countries covering the period 1990 to 2010.
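
A minimal sketch of the kind of fixed-effects panel regression such a study typically runs, assuming the linearmodels package; the variable names and the synthetic panel are illustrative:

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS  # pip install linearmodels

rng = np.random.default_rng(5)
countries = [f"C{i}" for i in range(24)]
years = list(range(1990, 2011))
idx = pd.MultiIndex.from_product([countries, years], names=["country", "year"])
df = pd.DataFrame({"trade_openness": rng.normal(50, 10, len(idx))}, index=idx)
df["gdp_growth"] = 0.05 * df["trade_openness"] + rng.normal(0, 1, len(idx))

# Country fixed effects absorb time-invariant heterogeneity across countries
res = PanelOLS.from_formula(
    "gdp_growth ~ trade_openness + EntityEffects", data=df).fit()
print(res.params)
```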

Keywords: foreign trade, economic growth, OECD countries, panel data analysis

Procedia PDF Downloads 366
23535 Data-Driven Decision Making: A Reference Model for Organizational, Educational and Competency-Based Learning Systems

Authors: Emanuel Koseos

Abstract:

Data-Driven Decision Making (DDDM) refers to making decisions based on historical data in order to inform practice, develop strategies and implement policies that benefit organizational settings. In educational technology, DDDM facilitates the implementation of differential educational learning approaches such as Educational Data Mining (EDM) and Competency-Based Education (CBE), which commonly target university classrooms. There is a current need for DDDM models applied to middle and secondary schools, out of a concern for assessing the needs, progress and performance of students and educators with respect to regional standards, policies and the evolution of curriculums. To address these concerns, we propose a DDDM reference model developed using educational key process initiatives as inputs to a machine learning framework implemented with statistical software (SAS, R), providing a best-practice, complexity-free and automated approach for educators at the regional level. We assessed the efficiency of the model over a six-year period using data from 45 schools and grades K-12 in the Langley, BC, Canada regional school district. We concluded that the model has wider appeal, extending, for example, to business learning systems.

Keywords: competency-based learning, data-driven decision making, machine learning, secondary schools

Procedia PDF Downloads 153
23534 Data about Loggerhead Sea Turtle (Caretta caretta) and Green Turtle (Chelonia mydas) in Vlora Bay, Albania

Authors: Enerit Sacdanaku, Idriz Haxhiu

Abstract:

This study was conducted in the area of Vlora Bay, Albania. Data on the sea turtles Caretta caretta and Chelonia mydas from two time periods (1984–1991; 2008–2014) are given. All data gathered were analyzed using recent methodologies. For all turtles captured (as bycatch), the Curved Carapace Length (CCL) and Curved Carapace Width (CCW) were measured. These data were statistically analyzed; the mean was 67.11 cm for CCL and 57.57 cm for CCW across all individuals studied (n=13). All untagged individuals were tagged using metallic tags (Stockbrand's titanium tag) with an Albanian address. Sex was determined: 45.4% of individuals were females, 27.3% males and 27.3% juveniles. All turtles were examined for the presence of epibionts. The area of Vlora Bay is used by marine turtles (Caretta caretta) as a migratory corridor to pass from the Mediterranean to the northern part of the Adriatic Sea.

Keywords: Caretta caretta, Chelonia mydas, CCL, CCW, tagging, Vlora Bay

Procedia PDF Downloads 164
23533 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms

Authors: Farhat Imtiaz, Umar Farooq

Abstract:

In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat- and round-nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the ply-by-ply damage incurred. Plots of load–time, load–deflection, and energy–time history were drawn to approximate the inflicted damage. Unwanted data logged during impact testing, due to restrictions of the testing and logging systems, were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as eight- and sixteen-ply panels. However, relatively thick laminates, such as twenty-four-ply laminates impacted by a flat nose, generated clipped data which can only be de-noised using oscillatory algorithms. The literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports applications of filtering and extrapolation of the clipped data utilizing a fast Fourier convolution algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on these consistent findings, applying modern data filtering and extrapolation algorithms to data logged by existing machines has efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for impact-induced damage approximation in similar cases.
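
The sketch below shows only the FFT-convolution smoothing half of the filtering-and-extrapolation procedure, applied to a synthetic clipped load trace; the kernel width and signal parameters are assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve, windows

t = np.linspace(0, 10e-3, 2000)                      # 10 ms impact event
load = 8000 * np.exp(-((t - 4e-3) / 1.5e-3) ** 2)    # idealised load pulse (N)
noise = 300 * np.random.default_rng(6).normal(size=t.size)
logged = np.clip(load + noise, 0, 7000)              # noisy and clipped record

kernel = windows.gaussian(101, std=12)
kernel /= kernel.sum()                               # unit-area smoothing kernel
smoothed = fftconvolve(logged, kernel, mode="same")  # convolution via FFT

print(f"raw peak: {logged.max():.0f} N, smoothed peak: {smoothed.max():.0f} N")
```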

Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation

Procedia PDF Downloads 119
23532 Design of Incident Information System in IoT Virtualization Platform

Authors: Amon Olimov, Umarov Jamshid, Dae-Ho Kim, Chol-U Lee, Ryum-Duck Oh

Abstract:

This paper proposes an incident information system based on an IoT virtualization platform. The IoT information-based environment is a platform developed to collect a variety of data by managing regionally scattered IoT devices easily and conveniently, in addition to analyzing data collected from roads. This paper configures the platform to provide incident information based on sensed data. The platform also provides the same input/output interface as UNIX and Linux by mapping IoT devices to the directories and files of the file system. In addition, it offers a variety of access methods for the devices, so it can be applied not only to incident information but also to other platforms. This paper proposes an incident information system that identifies and provides various data in real time on urgent matters on roads, based on the existing USN/M2M and IoT virtualization platform.

Keywords: incident information system, IoT, virtualization platform, USN, M2M

Procedia PDF Downloads 333
23531 Social Network Analysis as a Research and Pedagogy Tool in Problem-Focused Undergraduate Social Innovation Courses

Authors: Sean McCarthy, Patrice M. Ludwig, Will Watson

Abstract:

This exploratory case study explores the deployment of Social Network Analysis (SNA) in mapping community assets in an interdisciplinary, undergraduate, team-taught course focused on income insecure populations in a rural area in the US. Specifically, it analyzes how students were taught to collect data on community assets and to visualize the connections between those assets using Kumu, an SNA data visualization tool. Further, the case study shows how social network data was also collected about student teams via their written communications in Slack, an enterprise messaging tool, which enabled instructors to manage and guide student research activity throughout the semester. The discussion presents how SNA methods can simultaneously inform both community-based research and social innovation pedagogy through the use of data visualization and collaboration-focused communication technologies.
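
A small sketch of the asset-mapping idea using networkx: community assets as nodes, connections as edges, and degree centrality to surface hubs. The assets and links are invented; Kumu ingests a comparable node/edge structure:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("food bank", "church"),
    ("food bank", "health clinic"),
    ("church", "shelter"),
    ("health clinic", "shelter"),
    ("school", "food bank"),
])

# Degree centrality surfaces the most-connected assets (potential hubs)
centrality = nx.degree_centrality(G)
for asset, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{asset}: {score:.2f}")
```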

Keywords: social innovation, social network analysis, pedagogy, problem-based learning, data visualization, information communication technologies

Procedia PDF Downloads 127
23530 Mobile Learning: Toward Better Understanding of Compression Techniques

Authors: Farouk Lawan Gambo

Abstract:

Data compression shrinks files into fewer bits than their original representation. It is especially advantageous on the internet because the smaller a file, the faster it can be transferred. However, most of the concepts in data compression are abstract in nature, making them difficult for some students (engineers in particular) to digest. To determine the best approach to learning data compression techniques, this paper first studies the learning preferences of engineering students, who tend to have strong active, sensing, visual and sequential learning preferences. The paper also examines the advantages that mobile learning offers: learning at the point of interest, efficiency, connection, and many more. A survey of a reasonable number of students, selected through random sampling, was carried out to see whether considering learning preferences and the advantages of mobile learning gives a promising improvement over the traditional way of learning. Evidence from data analysis in MS Excel, chosen with a concern for error-free findings, shows a significant difference in the students after using learning content provided on smartphones; the findings, presented in bar charts and pie charts, indicate that mobile learning promises to be an effective mode of learning.

Keywords: data analysis, compression techniques, learning content, traditional learning approach

Procedia PDF Downloads 332
23529 Human Immunodeficiency Virus (HIV) Test Predictive Modeling and Identification of Determinants of HIV Testing for People above Fourteen Years of Age in Ethiopia Using Data Mining Techniques: EDHS 2011

Authors: S. Abera, T. Gidey, W. Terefe

Abstract:

Introduction: Testing for HIV is the key entry point to HIV prevention, treatment, and care and support services. Predictive data mining techniques can therefore be of great benefit in analyzing and discovering new patterns in huge datasets such as the EDHS 2011 data. Objectives: The objective of this study is to build a predictive model for HIV testing and identify determinants of HIV testing for adults above fourteen years of age using data mining techniques. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was followed to build the predictive model for HIV testing and explore association rules between HIV testing and the selected attributes among adult Ethiopians. The decision tree, Naïve Bayes, logistic regression and artificial neural network data mining techniques were used to build the predictive models. Results: The target dataset contained 30,625 study participants, of whom 16,515 (53.9%) were women. Nearly three-fifths, 17,719 (58%), had never been tested for HIV, while the remaining 12,906 (42%) had been tested. Ethiopians with a higher wealth index, a higher educational level, aged 20 to 29 years, having no stigmatizing attitude towards HIV-positive persons, urban residence, HIV-related knowledge, exposure to family planning information in the mass media, and knowing a place to get tested for HIV showed increased rates of HIV testing. Conclusion and Recommendation: Public health interventions should consider the identified determinants to encourage people to get tested for HIV.
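
A minimal sketch of one of the four named techniques, a decision tree classifying HIV-testing uptake; the encoded attributes and the toy target are invented stand-ins for the EDHS 2011 variables:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 5000
# Hypothetical encoded attributes: wealth index, education level, age group,
# urban residence, HIV-knowledge score (all invented codings)
X = rng.integers(0, 5, size=(n, 5))
# Toy target loosely following the reported pattern: higher wealth, education
# and urban residence raise the probability of having been tested
p = 1 / (1 + np.exp(-(0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 3] - 2)))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(tree.score(X_te, y_te), 3))
```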

Keywords: data mining, HIV, testing, Ethiopia

Procedia PDF Downloads 472
23528 Assessing Flood Risk and Mapping Inundation Zones in the Kelantan River Basin: A Hydrodynamic Modeling Approach

Authors: Fatemehsadat Mortazavizadeh, Amin Dehghani, Majid Mirzaei, Nurulhuda Binti Mohammad Ramli, Adnan Dehghani

Abstract:

Flooding is Malaysia's most common and serious natural disaster. The Kelantan River Basin is a tropical basin that experiences a rainy season during the North-East Monsoon from November to March, and it is one of the hardest hit areas in Peninsular Malaysia during heavy monsoon rainfall. Considering the consequences of flood events, it is essential to develop flood inundation maps as part of the mitigation approach. In this study, flood inundation zones in the Kelantan River Basin are delineated using a hydrodynamic model built with HEC-RAS, QGIS and ArcMap. The streamflow data were generated with a weather generator based on observation data and then statistically analyzed with the Extreme Value Type I (EV1) method for 2-, 5-, 25-, 50- and 100-year return periods. The minimum depth, maximum depth, mean depth, and standard deviation of all the scenarios, including the observed (OBS) scenario, were examined. Based on the results, the values generally increase with the return period for all scenarios. However, certain scenarios show different results, in which not all values increase with the return period. The OBS data fell in the middle range of Scenarios 1 to 40.
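
The EV1 step amounts to fitting a Gumbel distribution to annual maxima and reading off return levels at non-exceedance probability 1 - 1/T. A sketch with synthetic flows standing in for the generated streamflow:

```python
import numpy as np
from scipy.stats import gumbel_r

# Synthetic annual-maximum flows (m^3/s) standing in for the generated data
annual_max = gumbel_r.rvs(loc=3000, scale=800, size=40, random_state=8)

loc, scale = gumbel_r.fit(annual_max)
for T in (2, 5, 25, 50, 100):
    level = gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)  # T-year return level
    print(f"{T:>3}-year flood: {level:.0f} m^3/s")
```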

Keywords: flood inundation, Kelantan River Basin, hydrodynamic model, extreme value analysis

Procedia PDF Downloads 50
23527 The Use of Network Tool for Brain Signal Data Analysis: A Case Study with Blind and Sighted Individuals

Authors: Cleiton Pons Ferreira, Diana Francisca Adamatti

Abstract:

Advancements in computer technology have made it possible to obtain information for research in biology and neuroscience. Networks have long been used to represent important biological processes in such data, and the use of these tools has shifted from purely illustrative and didactic to more analytic, even including interaction analysis and hypothesis formulation. Many studies have involved this application, but not directly for the interpretation of data obtained from brain functions, calling for new development perspectives in neuroinformatics that use existing tool models already disseminated in bioinformatics. This study includes an analysis of neurological data, namely electroencephalogram (EEG) signals, using Cytoscape, an open-source software tool for visualizing complex networks in biological databases. The data were obtained from a comparative case study developed in research at the University of Rio Grande (FURG), using EEG signals from a Brain-Computer Interface (BCI) with 32 electrodes, recorded from a blind and a sighted individual during the execution of an activity that stimulated spatial ability. This study intends to present results that lead to better ways to use and adapt techniques supporting the treatment of brain signal data, to elevate understanding and learning in neuroscience.
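
One plausible route from 32-channel EEG signals to a Cytoscape-ready network is a thresholded correlation graph exported as GraphML, as sketched below with placeholder signals; the threshold and signal model are assumptions:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(9)
base = rng.normal(size=(4, 5000))                # shared latent sources
mix = rng.normal(size=(32, 4))
signals = mix @ base + 0.5 * rng.normal(size=(32, 5000))  # 32 channels

corr = np.corrcoef(signals)                      # channel-by-channel correlation
G = nx.Graph()
G.add_nodes_from(f"ch{i}" for i in range(32))
for i in range(32):
    for j in range(i + 1, 32):
        if abs(corr[i, j]) > 0.5:                # keep strongly coupled pairs
            G.add_edge(f"ch{i}", f"ch{j}", weight=float(corr[i, j]))

nx.write_graphml(G, "eeg_network.graphml")       # importable into Cytoscape
print(G)
```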

Keywords: neuroinformatics, bioinformatics, network tools, brain mapping

Procedia PDF Downloads 153
23526 Analysis of the Impact of Climate Change on Maize (Zea mays) Yield in Central Ethiopia

Authors: Takele Nemomsa, Girma Mamo, Tesfaye Balemi

Abstract:

Climate change refers to a change in the state of the climate that can be identified (e.g. using statistical tests) by changes in the mean and/or variance of its properties and that persists for an extended period, typically decades or longer. In Ethiopia, maize production in relation to climate change at regional and sub-regional scales has not been studied in detail. Thus, this study aimed to analyse the impact of climate change on maize yield in Ambo District, Central Ethiopia. To this effect, weather data, soil data and maize experimental data for the Arganne hybrid were used. APSIM software was used to investigate the response of maize (Zea mays) yield to different agronomic management practices using current and future (2020s–2080s) climate data. Climate change projection data, downscaled using SDSM, were used as the climate input for the impact analysis. Compared to agronomic practices, the impact of climate change on Arganne in Central Ethiopia is minute. However, in the Ambo area the yield of the Arganne hybrid is projected to decline by 1.06% to 2.02% in the 2020s and by 1.56% in the 2050s, while in the 2080s it is projected to increase by 1.03% to 2.07%. Thus, to adapt to the changing climate, farmers should consider increasing plant density and fertilizer rate per hectare.

Keywords: APSIM, downscaling, response, SDSM

Procedia PDF Downloads 356
23525 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff's quasi-steady stall model. The NGN method is an algorithm that utilizes a Feed Forward Neural Network (FFNN) and Gauss-Newton optimization to estimate the parameters; it does not require any a priori postulation of a mathematical model or solving of the equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before being applied to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and maximum likelihood estimates. Validation was also carried out by comparing the response of the measured motion variables with the response generated using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained in terms of stall characteristics and aerodynamic parameters were encouraging and reasonably accurate, establishing NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
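
At its core, the Gauss-Newton half of NGN repeatedly solves the linearized step theta <- theta + (J'J)^(-1) J'r. A bare-bones illustration on a simple exponential model, which is a stand-in for the FFNN-based aircraft response the abstract does not specify in detail:

```python
import numpy as np

rng = np.random.default_rng(10)
t = np.linspace(0, 5, 100)
theta_true = np.array([2.0, 0.8])
y = theta_true[0] * np.exp(-theta_true[1] * t) + 0.02 * rng.normal(size=t.size)

def model(theta):
    a, b = theta
    return a * np.exp(-b * t)

def jacobian(theta):
    a, b = theta
    return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

theta = np.array([1.0, 0.5])                           # initial guess
for _ in range(20):
    r = y - model(theta)                               # residuals
    J = jacobian(theta)
    theta = theta + np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton step

print("estimated parameters:", np.round(theta, 3))     # close to (2.0, 0.8)
```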

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 420
23524 Big Data’s Mechanistic View of Human Behavior May Displace Traditional Library Missions That Empower Users

Authors: Gabriel Gomez

Abstract:

The very concept of information-seeking behavior, and the means by which librarians teach users to gain information (that is, information literacy), are at the heart of how libraries deliver information; yet big data will forever change human interaction with information and the way such behavior is both studied and taught. Just as importantly, big data will orient the study of behavior towards commercial ends because of a tendency towards instrumentalist views of human behavior, something one might also call a trend towards behaviorism. This oral presentation explores how the impact of big data on understandings of human behavior might reshape the library and information science (LIS) view of human behavior and information literacy, and what this might mean for the social justice aims and concomitant community action normally at the center of librarianship. The methodology employed here is a non-empirical examination of current understandings of LIS with regard to social justice, alongside an examination of the benefits and dangers foreseen with the growth of big data analysis. The rise of big data within the ever-changing information environment encapsulates a shift to a more mechanistic view of human behavior, one that can easily encompass information-seeking behavior and information use. As commercial aims displace the political and ethical aims that are often central to the missions espoused by libraries and the social sciences, the very altruism and power relations found in LIS are at risk. This presentation examines how the social justice impulses of librarians regarding power and information can be challenged by big data, particularly as librarians understand user behavior and promote information literacy. The creeping behaviorist impulse inherent in big data's emphasis on specific solutions, that is, answers to questions that ask how, as opposed to larger questions of why people learn or use information, threatens library and information science ideals. Together with the commercial nature of most big data, this existential threat can harm the social justice nature of librarianship.

Keywords: big data, library information science, behaviorism, librarianship

Procedia PDF Downloads 362
23523 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) under two interference models: the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR). The main issue is to compute schedules with the minimum number of timeslots, that is, minimum-latency schedules, such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model and introduce a constant-factor approximation algorithm. To the best of our knowledge, this is the first result for the data collection problem with the bounded-sized message model in both interference models.
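
Under the SINR model, a set of simultaneous transmissions is feasible only if each receiver's signal-to-interference-plus-noise ratio clears a threshold beta. A small feasibility check, with assumed power, path-loss, noise and threshold values:

```python
import math

P, alpha, noise, beta = 1.0, 3.0, 1e-6, 2.0   # assumed model parameters

def sinr_ok(links):
    """links: list of ((sx, sy), (rx, ry)) sender/receiver pairs
    transmitting in the same timeslot."""
    for i, (s, r) in enumerate(links):
        signal = P / math.dist(s, r) ** alpha
        interference = sum(P / math.dist(s2, r) ** alpha
                           for j, (s2, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

# Two well-separated links can share a timeslot; two adjacent ones cannot
print(sinr_ok([((0, 0), (1, 0)), ((50, 0), (51, 0))]))  # True
print(sinr_ok([((0, 0), (1, 0)), ((2, 0), (3, 0))]))    # False
```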

Keywords: data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks

Procedia PDF Downloads 200