Search results for: omics data analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 40995

40785 Data Collection Based on the Questionnaire Survey In-Hospital Emergencies

Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala

Abstract:

The methods used for data collection are diverse: electronic media, focus-group interviews, and short-answer questionnaires [1]. Poor-quality data, resulting for example from poorly designed questionnaires, the absence of competent translators or interpreters, or the incorrect recording of responses, can lead to conclusions that are not supported by the data, or to a focus only on the average effect of a program or policy. Several measures can avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments and using technologies that allow greater anonymity in the responses [2]. In this context, we set out to collect good-quality data through a sizeable questionnaire-based survey of hospital emergencies, in order to improve emergency services and alleviate the problems encountered. In this paper, we present our study and detail the steps followed to collect relevant, consistent and practical data.

Keywords: data collection, survey, questionnaire, database, data analysis, hospital emergencies

Procedia PDF Downloads 90
40784 Disaster Resilience Analysis of Atlanta Interstate Highway System within the Perimeter

Authors: Mengmeng Liu, J. David Frost

Abstract:

The interstate highway system within the Atlanta Perimeter plays an important role in residents’ daily life. The serious impact of the Atlanta I-85 collapse implies that the transportation system in the region lacks a cohesive and comprehensive transportation plan. Therefore, a disaster resilience analysis of the transportation system is necessary. Resilience is the system’s capability to persist or to maintain transportation services when exposed to changes or shocks. This paper analyzes the resilience of the whole transportation system within the Perimeter and examines how removing interstate sections affects that resilience. The data used are the Atlanta transportation networks and LEHD Origin-Destination Employment Statistics data. First, we calculate the traffic flow on each road section from the LEHD data, assuming each trip travels along the shortest-travel-time path. Second, we calculate the resilience measures, flow-based connectivity and centrality of the transportation network, and examine how they change when each interstate section is removed from the current system. Finally, we obtain the resilience function curve of the interstates and identify the most resilient section. The results show that the resilience calculation framework is effective and can provide useful information for transportation planning and for sustainability analysis of transportation infrastructure.
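
As an illustration of this kind of edge-removal analysis (a hypothetical sketch, not the authors' code or data), the following Python snippet weights a toy road graph by travel time, computes flow-related centrality, and checks how connectivity changes as each section is removed.

```python
# Hypothetical sketch: flow-related centrality and connectivity before/after
# removing one road section at a time. The toy graph stands in for the Atlanta network.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 4.0), ("B", "C", 3.0), ("A", "C", 9.0),
    ("C", "D", 2.0), ("B", "D", 7.0),
], weight="travel_time")

def resilience_metrics(graph):
    # Betweenness weighted by travel time approximates how much shortest-time
    # traffic each link carries; the component count captures connectivity.
    centrality = nx.edge_betweenness_centrality(graph, weight="travel_time")
    return centrality, nx.number_connected_components(graph)

baseline, _ = resilience_metrics(G)
print("most critical link at baseline:", max(baseline, key=baseline.get))

for u, v in list(G.edges()):
    H = G.copy()
    H.remove_edge(u, v)                      # simulate losing one interstate section
    _, components = resilience_metrics(H)
    print(f"removed {u}-{v}: {components} component(s) remain")
```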

Keywords: connectivity, interstate highway system, network analysis, resilience analysis

Procedia PDF Downloads 237
40783 An Analysis of the Relation between Need for Psychological Help and Psychological Symptoms

Authors: İsmail Ay

Abstract:

This study aimed to determine the relationship between the need for psychological help and psychological symptoms. The sample consists of 530 university students enrolled at Atatürk University in the 2015-2016 academic year. The Need for Psychological Help Scale and the Brief Symptom Inventory were used to collect data. Correlation analysis and a structural equation model with latent variables were used for data analysis, and normality and homogeneity analyses were used to check the basic assumptions of parametric tests. The findings show that as psychological symptoms increase, the need for psychological help also increases. The findings are discussed in relation to the literature.

Keywords: psychological symptoms, need for psychological help, structural equation model, correlation

Procedia PDF Downloads 350
40782 The Use of Network Tool for Brain Signal Data Analysis: A Case Study with Blind and Sighted Individuals

Authors: Cleiton Pons Ferreira, Diana Francisca Adamatti

Abstract:

Advances in computer technology have made it possible to obtain information for research in biology and neuroscience. Networks have long been used to represent important biological processes, and their use has shifted from purely illustrative and didactic to analytic, including interaction analysis and hypothesis formulation. Many studies have applied such networks, but not directly to the interpretation of data on brain function, which calls for new developments in neuroinformatics using tools already disseminated in bioinformatics. This study analyzes neurological data from electroencephalogram (EEG) signals using Cytoscape, an open-source software tool for visualizing complex networks in biological databases. The data were obtained from a comparative case study conducted at the University of Rio Grande (FURG), using EEG signals from a brain-computer interface (BCI) with 32 electrodes recorded from a blind individual and a sighted individual during an activity that stimulated spatial ability. The study intends to present results that lead to better ways of using and adapting techniques for the treatment of brain-signal data, improving understanding and learning in neuroscience.
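
The following sketch (hypothetical, with synthetic signals in place of the FURG recordings) shows one way to turn multichannel EEG into a channel-correlation network and export it in a format Cytoscape can import; the channel names and threshold are assumptions.

```python
# Hypothetical sketch: build a channel-correlation network from 32-channel EEG
# and write GraphML, which Cytoscape imports directly. Data here are synthetic.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 5000))              # 32 channels x 5000 samples
eeg[:16] += 0.8 * rng.standard_normal((1, 5000))   # inject a shared component

corr = np.corrcoef(eeg)                            # channel-by-channel correlations
G = nx.Graph()
threshold = 0.3                                    # keep strongly correlated pairs only
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) >= threshold:
            G.add_edge(f"ch{i}", f"ch{j}", weight=float(corr[i, j]))

nx.write_graphml(G, "eeg_network.graphml")
```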

Keywords: neuroinformatics, bioinformatics, network tools, brain mapping

Procedia PDF Downloads 156
40781 Estimation of Desktop E-Wastes in Delhi Using Multivariate Flow Analysis

Authors: Sumay Bhojwani, Ashutosh Chandra, Mamita Devaburman, Akriti Bhogal

Abstract:

This article uses material flow analysis to estimate e-waste in the Delhi/NCR region. The analysis is based on sales data obtained from various sources. Much of the available sales data is unreliable because of the existence of a huge informal sector, which in India accounts for more than 90% of sales; the scope of this study is therefore limited to the formal sector. To project the sales data to 2030, we used linear regression to avoid complexity; actual sales after 2015 may vary non-linearly, but a basic linear relation is assumed. The purpose of this study was to obtain an approximate quantity of desktop e-waste expected by 2030, so that preparations can begin for the unavoidable investment in treating this ever-rising waste stream. The results can inform the installation of an e-waste treatment plant in Delhi.
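
A minimal sketch of the linear projection step is shown below; the sales figures are invented for illustration, and the fitted line simply extrapolates to 2030 as the abstract assumes.

```python
# Hypothetical sketch: fit a least-squares line to historical desktop sales and
# extrapolate to 2030. Figures are invented, not the study's sales data.
import numpy as np

years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
sales = np.array([1.20, 1.35, 1.41, 1.55, 1.62, 1.70])    # million units (illustrative)

slope, intercept = np.polyfit(years, sales, deg=1)         # linear regression
future = np.arange(2016, 2031)
projected = slope * future + intercept

for year, units in zip(future, projected):
    print(f"{year}: {units:.2f} million units (projected)")
```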

Keywords: e-wastes, Delhi, desktops, estimation

Procedia PDF Downloads 241
40780 Geospatial Network Analysis Using Particle Swarm Optimization

Authors: Varun Singh, Mainak Bandyopadhyay, Maharana Pratap Singh

Abstract:

The shortest path (SP) problem concerns finding the path from a specified origin to a specified destination in a given network that minimizes the total cost associated with the path. The problem has widespread applications, including vehicle routing in transportation systems, particularly in-vehicle route guidance systems (RGS), and the traffic assignment problem in transportation planning. Evolutionary methods such as genetic algorithms (GA), ant colony optimization, and particle swarm optimization (PSO) have been applied to such complex optimization problems to overcome the shortcomings of existing shortest path analysis methods. Various researchers have reported that PSO outperforms other evolutionary optimization algorithms in terms of success rate and solution quality. Further, geographic information systems (GIS) have emerged as key platforms for geospatial data analysis and visualization. This paper focuses on applying PSO to the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the analysis results is carried out in GIS.
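
A minimal, hypothetical PSO sketch for the shortest path problem is given below, using a priority-based encoding (each particle assigns a priority to every node and a path is decoded greedily); the toy graph and parameters are assumptions, not the Allahabad road network.

```python
# Hypothetical PSO sketch for the shortest path problem with priority encoding.
import random

graph = {  # adjacency list with edge costs (e.g., travel times)
    "A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1, "D": 7},
    "C": {"A": 5, "B": 1, "D": 2}, "D": {"B": 7, "C": 2},
}
nodes, source, target = list(graph), "A", "D"

def decode(priorities):
    # Walk greedily towards the highest-priority unvisited neighbour.
    path, current = [source], source
    while current != target:
        candidates = [n for n in graph[current] if n not in path]
        if not candidates:
            return path, float("inf")            # dead end: penalise
        current = max(candidates, key=lambda n: priorities[nodes.index(n)])
        path.append(current)
    return path, sum(graph[a][b] for a, b in zip(path, path[1:]))

def pso(n_particles=20, iterations=50, w=0.7, c1=1.5, c2=1.5):
    dim = len(nodes)
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [decode(p)[1] for p in pos]
    g = pbest_cost.index(min(pbest_cost))
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            cost = decode(pos[i])[1]
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < gbest_cost:
                    gbest, gbest_cost = pos[i][:], cost
    return decode(gbest)

print(pso())   # expected: (['A', 'B', 'C', 'D'], 5) on this toy graph
```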

Keywords: particle swarm optimization, GIS, traffic data, outliers

Procedia PDF Downloads 460
40779 The Comparison of Joint Simulation and Estimation Methods for the Geometallurgical Modeling

Authors: Farzaneh Khorram

Abstract:

This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and pinpoint blocks with the highest potential for energy usage during the grinding process within a specified region. Leveraging geostatistical techniques, particularly joint estimation or simulation based on geometallurgical data from various mineral processing stages, the objective is to forecast CCE across the study area. The dataset encompasses variables obtained from 2754 drill samples and a block model comprising 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved the assessment of contacts between these units and the estimation of CCE via cokriging, considering its correlation with SPI. The selection of blocks exhibiting maximum CCE holds paramount importance for cost estimation, production planning, and risk mitigation. The study conducted exploratory data analysis on lithology, rock type, and failure variables, revealing seamless boundaries between geometallurgical units. Simulation methods such as plurigaussian and turning bands produced more realistic outcomes than cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.
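
As a rough illustration of estimating a block model from drill samples (a hypothetical stand-in only: the study used cokriging and geostatistical simulation, while this sketch uses a Gaussian-process regressor, which behaves like simple kriging), consider the following with synthetic coordinates and values.

```python
# Hypothetical kriging-style interpolation onto block centroids using a Gaussian
# process; synthetic data, not the study's drill samples or workflow.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
drill_xy = rng.uniform(0, 1000, size=(200, 2))               # sample locations (m)
cce = 10 + 0.01 * drill_xy[:, 0] + rng.normal(0, 0.5, 200)   # synthetic CCE values

kernel = 1.0 * RBF(length_scale=200.0) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(drill_xy, cce)

gx, gy = np.meshgrid(np.linspace(0, 1000, 20), np.linspace(0, 1000, 20))
blocks = np.column_stack([gx.ravel(), gy.ravel()])           # block centroids
pred, std = gp.predict(blocks, return_std=True)              # estimate + uncertainty

top = np.argsort(pred)[-5:]                                  # highest-CCE blocks
print(blocks[top], pred[top].round(2))
```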

Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging

Procedia PDF Downloads 37
40778 Leveraging Unannotated Data to Improve Question Answering for French Contract Analysis

Authors: Touila Ahmed, Elie Louis, Hamza Gharbi

Abstract:

State-of-the-art question answering models have recently shown impressive performance, especially in a zero-shot setting. This approach is particularly useful when confronted with a highly diverse domain such as the legal field, in which it is increasingly difficult to build a dataset covering every notion and concept. In this work, we propose a flexible generative question answering approach to contract analysis as well as a weakly supervised procedure to leverage unannotated data and boost our models’ performance in general, and their zero-shot performance in particular.

Keywords: question answering, contract analysis, zero-shot, natural language processing, generative models, self-supervision

Procedia PDF Downloads 163
40777 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling

Authors: Sushma Ghogale

Abstract:

With the surge in user-generated content, feedback, and reviews on the internet, it has become both possible and important to know consumers' opinions about products and services. These data are important for potential customers and for the businesses providing the services. Social media attracts significant attention and has become the most prominent channel for expressing unregulated opinions. Prospective customers look for reviews from experienced customers before deciding to buy a product or service, and several websites provide a platform for users to post their feedback for the provider and for potential customers. However, the biggest challenge in analyzing such data is extracting latent features and providing term-level analysis. This paper proposes an approach that uses topic modeling to classify reviews into latent topics, identifies the importance of each topic, and then applies sentiment analysis to assess the satisfaction level for each topic. Hotel reviews are classified using multiple machine learning techniques, and different classifiers are compared for mining the opinions in user reviews through sentiment analysis. The experiments conclude that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
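
A compact sketch of the proposed pipeline, on invented review snippets, is shown below: LDA assigns each review a topic mixture and a Multinomial Naive Bayes classifier scores sentiment. It illustrates the approach, not the paper's trained models.

```python
# Hypothetical sketch: LDA topic modeling plus Naive Bayes sentiment on toy reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.naive_bayes import MultinomialNB

reviews = ["clean room and friendly staff", "terrible food and slow service",
           "great location near the beach", "dirty bathroom, rude reception"]
sentiment = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
topic_share = lda.transform(X)                 # per-review topic importance

clf = MultinomialNB().fit(X, sentiment)        # sentiment classifier on term counts
print(topic_share.round(2))
print(clf.predict(vec.transform(["friendly staff but slow service"])))
```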

Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis

Procedia PDF Downloads 87
40776 Assessment of Social Vulnerability of Urban Population to Floods – a Case Study of Mumbai

Authors: Sherly M. A., Varsha Vijaykumar, Subhankar Karmakar, Terence Chan, Christian Rau

Abstract:

This study aims at proposing an indicator-based framework for assessing the social vulnerability of any coastal megacity to floods. The final set of social vulnerability indicators is chosen from feasible and available indicators prepared in a Geographic Information System (GIS) framework at a fine scale (1-km grid cells) to provide insight into the spatial variability of vulnerability. The optimal weight for each individual indicator is assigned using data envelopment analysis (DEA), as it avoids subjective weights and improves confidence in the results obtained. In order to de-correlate and reduce the dimension of the multivariate data, principal component analysis (PCA) is applied. The proposed methodology is demonstrated on twenty-four wards of Mumbai under the jurisdiction of the Municipal Corporation of Greater Mumbai (MCGM). This framework of vulnerability assessment is not limited to the present study area and may be applied to other urban damage centers.
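
The PCA step can be illustrated as follows (a hypothetical sketch with a synthetic indicator matrix, rows being 1-km grid cells); the DEA weighting stage is not shown.

```python
# Hypothetical sketch of the PCA step: de-correlate vulnerability indicators and
# keep the components explaining most of the variance. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
indicators = rng.random((500, 8))              # 500 grid cells, 8 raw indicators

scaled = StandardScaler().fit_transform(indicators)
pca = PCA(n_components=0.9)                    # keep 90% of the variance
components = pca.fit_transform(scaled)

print(components.shape, pca.explained_variance_ratio_.round(3))
```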

Keywords: urban floods, vulnerability, data envelopment analysis, principal component analysis

Procedia PDF Downloads 340
40775 Social Media Mining with R. Twitter Analyses

Authors: Diana Codat

Abstract:

Tweet analysis is part of text mining: each document is a written text, and the usual text mining techniques can be applied, in particular by switching to the bag-of-words representation. However, tweets have peculiarities, some of which may enrich the analysis: their length is calibrated (at least for public messages), special characters identify authors (@) and themes (#), and the tweet and retweet mechanisms make it possible to follow the diffusion of information. Conversely, other characteristics may disrupt the analyses: because space is limited, authors often use abbreviations and emoticons to express feelings, and they do not pay much attention to spelling, which creates noise that can complicate the task. Tweets carry a great deal of potentially interesting information, and their exploitation is one of the main axes of social network analysis. We show how to access Twitter messages, initiate a study of their properties, and follow up with the exploitation of message content, working in R with the package 'twitteR'. The study of tweets is a strong focus of social network analysis because Twitter has become an important vector of communication. This example shows that it is easy to initiate an analysis from data extracted directly online, and that the data preparation phase is of great importance.

Keywords: data mining, language R, social networks, Twitter

Procedia PDF Downloads 159
40774 A Study of the Adaptive Reuse for School Land Use Strategy: An Application of the Analytic Network Process and Big Data

Authors: Wann-Ming Wey

Abstract:

With today's popularity and progress of information technology, assembling and analyzing big data sets is no longer a major conundrum. Relevant big data can now be used not only to analyze and simulate the likely status of urban development in the near future, but also to provide a more comprehensive and reasonable basis for policy implementation by government units and decision-makers. This research takes Taipei City as its scope and uses relevant big data variables (e.g., population, facility utilization, and related social policy ratings) together with the Analytic Network Process (ANP) to examine the possible reduction of land use in the city's primary and secondary schools. In addition to enhancing urban activities through better public facility utilization, the final results could help improve the efficiency of urban land use in the future. Furthermore, the assessment model and research framework established here provide a useful reference for future land use and adaptive reuse strategies for schools and other public facilities.
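
For readers unfamiliar with the ANP mechanics, the priority step can be sketched as below (a hypothetical 4-element example, not the study's actual criteria network): the weighted supermatrix is raised to successive powers until it converges, and the limiting columns give the priorities.

```python
# Hypothetical ANP sketch: power iteration on a column-stochastic supermatrix.
import numpy as np

W = np.array([            # invented weighted supermatrix; columns sum to 1
    [0.00, 0.40, 0.30, 0.25],
    [0.50, 0.00, 0.30, 0.25],
    [0.25, 0.30, 0.00, 0.50],
    [0.25, 0.30, 0.40, 0.00],
])

limit = W.copy()
for _ in range(200):
    nxt = limit @ W
    if np.allclose(nxt, limit, atol=1e-10):
        break
    limit = nxt

print(limit[:, 0].round(4))    # limiting priorities of the four elements
```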

Keywords: adaptive reuse, analytic network process, big data, land use strategy

Procedia PDF Downloads 186
40773 The Maximum Throughput Analysis of UAV Datalink 802.11b Protocol

Authors: Inkyu Kim, SangMan Moon

Abstract:

The IEEE 802.11b protocol provides a data rate of up to 11 Mbps, whereas the aerospace industry seeks higher-data-rate COTS data link systems for UAVs. The Total Maximum Throughput (TMT) and delay time have been studied by many researchers in past years. This paper provides the theoretical data throughput performance of a UAV formation flight data link using existing 802.11b performance theory. We operate a UAV formation flight of more than 30 quadcopters using the 802.11b protocol. We predict that the number of UAVs in formation flight must be bounded by the performance limitations of the data link protocol.
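
A back-of-the-envelope version of the theoretical throughput calculation is sketched below; it uses commonly cited 802.11b DSSS timing parameters (long preamble, ACK at 2 Mbps, mean backoff of CWmin/2 slots) and is only an approximation, not the paper's analysis.

```python
# Hypothetical 802.11b maximum-throughput estimate from commonly cited timings.
SLOT, SIFS, DIFS = 20e-6, 10e-6, 50e-6      # seconds
PLCP = 192e-6                               # long PLCP preamble + header
CWMIN = 31
MAC_OVERHEAD, ACK_BYTES = 28, 14            # MAC header + FCS, ACK frame size
DATA_RATE, BASIC_RATE = 11e6, 2e6           # bits per second

def tmt(payload_bytes: int) -> float:
    backoff = (CWMIN / 2) * SLOT
    t_data = PLCP + (MAC_OVERHEAD + payload_bytes) * 8 / DATA_RATE
    t_ack = PLCP + ACK_BYTES * 8 / BASIC_RATE
    cycle = DIFS + backoff + t_data + SIFS + t_ack
    return payload_bytes * 8 / cycle        # useful bits per second

print(f"TMT for 1500-byte frames: {tmt(1500) / 1e6:.2f} Mbps")  # roughly 6 Mbps
```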

Keywords: UAV datalink, UAV formation flight datalink, UAV WLAN datalink application, UAV IEEE 802.11b datalink application

Procedia PDF Downloads 370
40772 Stability of the Wellhead in the Seabed in One of the Marine Reservoirs of Iran

Authors: Mahdi Aghaei, Saeid Jamshidi, Mastaneh Hajipour

Abstract:

Factors affecting mechanical wellbore stability fall into two categories: (1) controllable factors and (2) uncontrollable factors. The purpose of geomechanical modeling of wells is to determine the allowable range of the controllable parameters, based on the stress regime at each point, by solving the governing equations of the poro-elastic environment around the well. In this research, a mechanical wellbore stability analysis was carried out for the Soroush oilfield. A geomechanical model of the field was built from the available data; this model provides the parameters needed to obtain the stress distribution around the wellbore. Initially, a base model was built in Abaqus from the collected data, and all subsequent sensitivity analyses, for example on porosity and permeability, were performed on this same base model. The results of these analyses show that, with the geomechanical parameters held constant, sensitivity to porosity and permeability is negligible; hence, the most important parameters affecting wellbore stability and instability are the geomechanical parameters.
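
Alongside a numerical model such as the study's Abaqus simulations, a quick analytical check is often made with the Kirsch solution at the wall of a vertical well; the sketch below is such a simplified check with invented stress values, not the Soroush field model.

```python
# Hypothetical Kirsch-solution check of hoop stress at the wellbore wall (MPa).
import numpy as np

sigma_H, sigma_h = 60.0, 40.0           # max/min horizontal far-field stresses
p_w = 30.0                              # wellbore (mud) pressure

theta = np.linspace(0.0, np.pi, 181)    # angle measured from the sigma_H direction
sigma_rr = np.full_like(theta, p_w)     # radial stress at the wall equals mud pressure
sigma_tt = sigma_H + sigma_h - 2.0 * (sigma_H - sigma_h) * np.cos(2 * theta) - p_w

i = np.argmax(sigma_tt)                 # largest hoop stress, likely breakout position
print(f"radial stress at wall: {sigma_rr[0]:.1f} MPa")
print(f"max hoop stress {sigma_tt[i]:.1f} MPa at {np.degrees(theta[i]):.0f} deg")
```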

Keywords: wellbore stability, movement, stress, instability

Procedia PDF Downloads 189
40771 Using Risk Management Indicators in Decision Tree Analysis

Authors: Adel Ali Elshaibani

Abstract:

Risk management indicators augment the reporting infrastructure, particularly for the board and senior management, to identify, monitor, and manage risks. This enhancement facilitates improved decision-making throughout the banking organization. Decision tree analysis is a tool that visually outlines potential outcomes, costs, and consequences of complex decisions. It is particularly beneficial for analyzing quantitative data and making decisions based on numerical values. By calculating the expected value of each outcome, decision tree analysis can help assess the best course of action. In the context of banking, decision tree analysis can assist lenders in evaluating a customer’s creditworthiness, thereby preventing losses. However, applying these tools in developing countries may face several limitations, such as data availability, lack of technological infrastructure and resources, lack of skilled professionals, cultural factors, and cost. Moreover, decision trees can create overly complex models that do not generalize well to new data, known as overfitting. They can also be sensitive to small changes in the data, which can result in different tree structures and can become computationally expensive when dealing with large datasets. In conclusion, while risk management indicators and decision tree analysis are beneficial for decision-making in banks, their effectiveness is contingent upon how they are implemented and utilized by the board of directors, especially in the context of developing countries. It’s important to consider these limitations when planning to implement these tools in developing countries.
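
A minimal decision tree sketch for creditworthiness screening is shown below; the features and labels are invented toy data, not a bank's records, and only illustrate how the expected outcome of each branch can be read off.

```python
# Hypothetical decision tree for creditworthiness on invented applicant data.
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: income (thousands), debt ratio, years as customer
X = [[45, 0.35, 2], [80, 0.10, 8], [30, 0.55, 1],
     [60, 0.20, 5], [25, 0.60, 3], [90, 0.15, 10]]
y = [0, 1, 0, 1, 0, 1]                  # 1 = creditworthy, 0 = not

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "debt_ratio", "tenure_years"]))
print(tree.predict([[55, 0.30, 4]]))    # score a new applicant
```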

Keywords: risk management indicators, decision tree analysis, developing countries, board of directors, bank performance, risk management strategy, banking institutions

Procedia PDF Downloads 41
40770 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments, which requires integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on open data is increasing. However, each government has its own procedures for publishing its data, which results in a variety of data set formats because there are no international standards specifying them. Due to this variety of formats, we must build a data integration process that is able to bring together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler, but they require interaction with a data scientist as a final step. In our case we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to each government's data sources in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, and wind speed. For the past two years, the government of Madrid has been publishing its open data bases of environmental indicators in real time, and other governments (such as Andalucía or Bilbao) have published environmental open data sets as well. All of those data sets have different formats; our solution is able to integrate them all and, furthermore, allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, the data from every government share the same format and the analysis process can be started in a computationally better way. The tool presented in this work therefore has two goals: (1) the integration process and (2) a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). To open up our software tool, a second implementation was also developed in the R language as a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze huge amounts of data with high performance, and libraries such as shiny support the building of a graphic interface. A performance comparison between the two implementations found no significant differences. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer who wants to build their own applications.

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 295
40769 Various Advanced Statistical Analyses of Index Values Extracted from Outdoor Agricultural Workers Motion Data

Authors: Shinji Kawakura, Ryosuke Shibasaki

Abstract:

We have been assembling and developing various practical, promising applied sensing systems concerning agricultural advancement and the handing down of technical skills (guidance). These include advanced devices that capture real-time data on worker motion, which we analyze with various advanced statistical and human-dynamics methods (e.g., principal component analysis, Ward-method cluster analysis, and mapping). We have also been considering workers' daily health and safety issues. The targeted fields are mainly ordinary farms, meadows, and gardens. We then observed and discussed the time-series changes in the data and made some suggestions. The overall plan makes it possible to improve both the aforementioned applied systems and the farms themselves.
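
The analysis pipeline named above can be sketched as follows (hypothetical, with a synthetic feature matrix standing in for the worker-motion recordings): principal component analysis followed by Ward-method hierarchical clustering.

```python
# Hypothetical sketch: PCA then Ward clustering of worker-motion feature vectors.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
motion_features = rng.random((120, 12))        # e.g., per-segment accelerometer statistics

scores = PCA(n_components=3).fit_transform(motion_features)
tree = linkage(scores, method="ward")          # Ward-method hierarchical clustering
labels = fcluster(tree, t=4, criterion="maxclust")

print(np.bincount(labels)[1:])                 # segment counts per cluster
```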

Keywords: advanced statistical analysis, wearable sensing system, tradition of skill, supporting for workers, detecting crisis

Procedia PDF Downloads 378
40768 Time Series Analysis on the Production of Fruit Juice: A Case Study of National Horticultural Research Institute (Nihort) Ibadan, Oyo State

Authors: Abiodun Ayodele Sanyaolu

Abstract:

This research carried out a time series analysis of quarterly fruit juice production at the National Horticultural Research Institute, Ibadan, from 2010 to 2018. A documentary method of data collection was used, and least squares and moving averages were used in the analysis. The calculations and graphs showed increasing, decreasing, and uniform movements in both the plot of the original data and the tabulated quarterly values. Time series analysis was used to detect the trend in peak fruit juice production, which appears favourable over time, and additive and multiplicative models were used for forecasting. Since fruit juice production was observed to be highest in January of every year, it is strongly advised that the National Horticultural Research Institute make more provision for fruit juice storage outside this period of the year.
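
The two methods named above can be illustrated with invented quarterly figures: a centred moving average to smooth the seasonal pattern and a least-squares line for the trend (a hypothetical sketch, not the NIHORT data).

```python
# Hypothetical sketch: moving average and least-squares trend on invented quarterly data.
import numpy as np
import pandas as pd

production = pd.Series(
    [12, 15, 11, 18, 14, 17, 13, 20, 16, 19, 15, 22],
    index=pd.period_range("2010Q1", periods=12, freq="Q"),
)

moving_avg = production.rolling(window=4, center=True).mean()   # smooths seasonality
t = np.arange(len(production))
slope, intercept = np.polyfit(t, production.values, deg=1)      # least-squares trend

print(moving_avg.round(2))
print(f"trend: {intercept:.2f} + {slope:.2f} per quarter")
```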

Keywords: fruit juice, least square, multiplicative models, time series

Procedia PDF Downloads 127
40767 Developing Structured Sizing Systems for Manufacturing Ready-Made Garments of Indian Females Using Decision Tree-Based Data Mining

Authors: Hina Kausher, Sangita Srivastava

Abstract:

In India, there is a lack of a standard, systematic sizing approach for producing ready-made garments. Garment manufacturing companies use their own size tables, created by modifying international sizing charts for ready-made garments. The purpose of this study is to tabulate anthropometric data covering the variety of figure proportions in both height and girth. Measurements from 3,000 females between the ages of 16 and 80 years were collected in an anthropometric survey across some states of India to produce a sizing system suitable for clothing manufacture and retailing. These data are used for the statistical analysis of body measurements and the formulation of sizing systems and body measurement tables. Factor analysis is used to filter the control body dimensions from a large number of variables, and decision tree-based data mining is used to cluster the data. A standard, structured sizing system can facilitate pattern grading and garment production; moreover, it can improve buying ratios and upgrade size allocations to retail segments.
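
The two steps named above can be sketched as follows (hypothetical, with synthetic measurements and invented size labels): factor analysis isolates a few control dimensions, and a decision tree then assigns size groups from them.

```python
# Hypothetical sketch: factor analysis for control dimensions, then a decision tree.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
measurements = rng.normal(size=(300, 10))          # 300 subjects, 10 body dimensions
size_group = rng.integers(0, 4, size=300)          # invented size labels (0-3)

factors = FactorAnalysis(n_components=2).fit_transform(measurements)  # height/girth-like factors
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(factors, size_group)

print(tree.predict(factors[:5]))                   # size groups for the first subjects
```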

Keywords: anthropometric data, data mining, decision tree, garments manufacturing, sizing systems, ready-made garments

Procedia PDF Downloads 119
40766 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University

Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat

Abstract:

Scientific researchers face the analysis of very large data sets that are growing at a noticeable rate with today's and tomorrow's technologies. Hadoop and Spark are software frameworks developed for this purpose, and the Hadoop framework is suitable for many different hardware platforms. In this research, a Scientific Linux cluster for Big Data analysis (SLBD) is presented. SLBD runs open-source software on a high-performance cluster infrastructure with large computational capacity. SLBD is composed of a single cluster of identical, commodity-grade computers interconnected via a small LAN, with a fast switch and Gigabit Ethernet cards connecting four nodes. Cloudera Manager is used to configure and manage the Apache Hadoop stack. Hadoop is a framework that allows big data to be stored and processed across the cluster using the MapReduce algorithm, which divides a job into smaller tasks assigned to the network nodes, then collects the results to form the final result dataset. The SLBD clustering system allows fast and efficient processing of the large amounts of data produced by different applications, and provides high performance, high throughput, high availability, expandability, and cluster scalability.
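
The MapReduce idea described above can be sketched as a Hadoop Streaming mapper and reducer (the classic word count); the script below is a hypothetical illustration, and on a cluster such as SLBD it would be launched through the Hadoop Streaming jar shipped with the distribution.

```python
# Hypothetical Hadoop Streaming word count: Hadoop splits the input among nodes,
# runs the mapper on each split, sorts by key, then runs the reducer.
import sys

def mapper(stream):
    for line in stream:
        for word in line.split():
            print(f"{word}\t1")                    # emit (word, 1) pairs

def reducer(stream):
    current, count = None, 0
    for line in stream:
        word, value = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if role == "map" else reducer)(sys.stdin)
```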

Keywords: big data platforms, cloudera manager, Hadoop, MapReduce

Procedia PDF Downloads 342
40765 The Role of Environmental Analysis in Managing Knowledge in Small and Medium Sized Enterprises

Authors: Liu Yao, B. T. Wan Maseri, Wan Mohd, B. T. Nurul Izzah, Mohd Shah, Wei Wei

Abstract:

Effectively managing knowledge has become a vital weapon for businesses to survive or succeed in an increasingly competitive market. But do they perform environmental analysis when managing knowledge, and if so, at what level and with what significance? This paper established a conceptual framework covering the basic knowledge management activities (KMA) to examine their contribution towards organizational performance (OP). Environmental analysis (EA) was then investigated from both internal and external aspects to identify its effects on that contribution. Data were collected from 400 Chinese SMEs by questionnaire, and Cronbach's α and factor analysis were computed. Regression results show that external analysis is performed at a higher level than internal analysis. However, internal analysis mediates the effect of external analysis on the KMA-OP relation and plays a more significant role in that relation than external analysis. Thus, firms should improve environmental analysis, especially internal analysis, to enhance their KM practices.
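
The reliability check mentioned above can be computed directly; the sketch below calculates Cronbach's alpha from a matrix of questionnaire item scores (respondents by items), using invented scores rather than the survey data.

```python
# Hypothetical Cronbach's alpha computation on synthetic questionnaire scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(2)
base = rng.integers(1, 6, size=(400, 1))                  # shared attitude signal
scores = np.clip(base + rng.integers(-1, 2, size=(400, 5)), 1, 5)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```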

Keywords: knowledge management, environmental analysis, performance, mediating, small sized enterprises, medium sized enterprises

Procedia PDF Downloads 597
40764 Parabolic Impact Law of High Frequency Exchanges on Price Formation in Commodities Market

Authors: L. Maiza, A. Cantagrel, M. Forestier, G. Laucoin, T. Regali

Abstract:

Evaluating the impact of high frequency trading (HFT) on financial markets is very important for traders who use market analysis to detect winning transaction opportunities. An analysis of HFT data on the tobacco commodity market is discussed here, and an interesting linear relationship is shown between the trading frequency and the difference between the averaged trading prices above and below the considered trading frequency. This may open new perspectives on understanding market data and could provide a possible interpretation of Adam Smith's invisible hand.

Keywords: financial market, high frequency trading, analysis, impacts, Adam Smith invisible hand

Procedia PDF Downloads 340
40763 Mining Multicity Urban Data for Sustainable Population Relocation

Authors: Xu Du, Aparna S. Varde

Abstract:

In this research, we propose to conduct diagnostic and predictive analyses of the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract urban development trends as land use change patterns from a variety of data sources. The results are treated as part of urban big data together with other information such as population change and economic conditions. Multiple data mining methods are deployed on these data to analyze nonlinear relationships between parameters. The results determine the driving forces of population relocation with respect to urban sprawl, urban sustainability, and their related parameters. Experiments so far reveal that data mining methods discover useful knowledge from the multicity urban data. This work sets the stage for developing a comprehensive urban simulation model that caters to specific questions from targeted users, and it contributes towards achieving sustainability as a whole.

Keywords: data mining, environmental modeling, sustainability, urban planning

Procedia PDF Downloads 281
40762 Imputation of Urban Movement Patterns Using Big Data

Authors: Eusebio Odiari, Mark Birkin, Susan Grant-Muller, Nicolas Malleson

Abstract:

Big data typically refers to consumer datasets revealing detailed heterogeneity in human behavior, which, if harnessed appropriately, could potentially revolutionize our understanding of collective phenomena in the physical world. Inadvertent missing values skew these datasets and compromise the validity of the conclusions drawn from them. Here we discuss a conceptually consistent strategy for identifying other relevant datasets to combine with available big data, to plug the gaps and to create a rich, comprehensive dataset for subsequent analysis. Specifically, emphasis is on how these methodologies can for the first time enable the construction of more detailed pictures of passenger demand and drivers of mobility on the railways. These methodologies can predict the influence of changes within the network (such as a timetable change or the impact of a new station), explain local phenomena outside the network (such as rail-heading), and capture other impacts of urban morphology. Our analysis also reveals that our new imputation data model provides for more equitable revenue sharing amongst network operators who manage different parts of the integrated UK railways.

Keywords: big-data, micro-simulation, mobility, ticketing-data, commuters, transport, synthetic, population

Procedia PDF Downloads 210
40761 Synoptic Analysis of a Heavy Flood in the Province of Sistan-Va-Balouchestan: Iran January 2020

Authors: N. Pegahfar, P. Ghafarian

Abstract:

In this research, the synoptic weather conditions during the heavy flood of 10-12 January 2020 in the Sistan-va-Balouchestan Province of Iran are analyzed. To this end, reanalysis data from the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR), NCEP Global Forecast System (GFS) analysis data, measured data from a surface station, and satellite images from the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) are used for 9 to 12 January 2020. Atmospheric parameters in both the lower and upper troposphere are used, including absolute vorticity, wind velocity, temperature, geopotential height, relative humidity, and precipitation. Results indicate that both lower-level and upper-level currents were strong and that the transport of large amounts of moisture from the Oman Sea and the Red Sea towards the south and southeast of Iran (Sistan-va-Balouchestan Province) led to widespread, unexpected precipitation and then a heavy flood.

Keywords: Sistan-va-Balouchestan Province, heavy flood, synoptic, analysis data

Procedia PDF Downloads 83
40760 Comparative Study of Greenhouse Locations through Satellite Images and Geographic Information System: Methodological Evaluation in Venezuela

Authors: Maria A. Castillo H., Andrés R. Leandro C.

Abstract:

During the last decades, agricultural productivity in Latin America has increased with precision agriculture and more efficient agricultural technologies. The use of automated systems, satellite images, geographic information systems, data analysis tools, and artificial intelligence has contributed to more effective strategic decisions. Twenty years ago, the state of Mérida, located in the Venezuelan Andes, reported the largest area covered by greenhouses in the country, where certified seeds of potatoes, vegetables, ornamentals, and flowers were produced for export and for consumption in the central region of the country. In recent years, greenhouse production is estimated to have changed and the covered area to have decreased due to various factors, but there are few historical statistics of sufficient quantity and quality to support this estimate or to be used for analysis and decision making. The objective of this study is to compare data collected in 2007 about the geoposition, use, and covered area of the greenhouses with data available in 2021, as support for the analysis of the current situation of horticultural production in the main municipalities of the state of Mérida. The document presents the development of the work through the diagnosis, the integration of geographic coordinates in GIS, and the data analysis phases. As a result, the process is evaluated, a dashboard is presented with the most relevant data and the geographic coordinates integrated into GIS, and the obtained information is analyzed. Finally, recommendations for action are added, and further work is proposed to expand the information obtained and its geographical traceability over time. This study contributes to greater certainty in the data supporting the evaluation of social, environmental, and economic sustainability indicators and to better decisions aligned with the sustainable development goals in the area under review. At the same time, the methodology provides improvements to the agricultural data collection process that can be extended to other study areas and crops.

Keywords: greenhouses, geographic information system, protected agriculture, data analysis, Venezuela

Procedia PDF Downloads 80
40759 The Effect of Institutions on Economic Growth: An Analysis Based on Bayesian Panel Data Estimation

Authors: Mohammad Anwar, Shah Waliullah

Abstract:

This study investigated panel data regression models, using Bayesian and classical methods to study the impact of institutions on economic growth with data from 1990 to 2014, especially in developing countries. Under both the classical and Bayesian methodologies, two panel data models were estimated: common effects and fixed effects. For the Bayesian approach, prior information is used, and a normal-gamma prior is adopted for the panel data models. The analysis was carried out in the WinBUGS14 software. The estimated results showed that panel data models are valid models in the Bayesian methodology. In the Bayesian approach, all independent variables had positive and significant effects on the dependent variable. Based on the standard errors of all models, the fixed effects model is the best model in the Bayesian estimation of panel data models, as it has the lowest standard error compared with the other models.
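
As a rough modern analogue of the normal-gamma Bayesian setup (the paper itself used WinBUGS14), the sketch below fits a fixed-effects panel model in PyMC on simulated data; the variable names and priors are assumptions for illustration.

```python
# Hypothetical Bayesian fixed-effects panel sketch in PyMC on simulated data.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_countries, n_years = 10, 25
country = np.repeat(np.arange(n_countries), n_years)
X = rng.normal(size=(n_countries * n_years, 2))            # e.g., institution indices
y = 1.0 + X @ np.array([0.5, -0.3]) + rng.normal(0, 0.5, size=len(country))

with pm.Model():
    alpha = pm.Normal("alpha", mu=0, sigma=10, shape=n_countries)   # country effects
    beta = pm.Normal("beta", mu=0, sigma=10, shape=2)
    tau = pm.Gamma("tau", alpha=1.0, beta=1.0)                      # error precision
    mu = alpha[country] + pm.math.dot(X, beta)
    pm.Normal("growth", mu=mu, sigma=tau ** -0.5, observed=y)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(trace.posterior["beta"].mean(dim=("chain", "draw")).values)
```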

Keywords: Bayesian approach, common effect, fixed effect, random effect, Dynamic Random Effect Model

Procedia PDF Downloads 58
40758 Doing Cause-and-Effect Analysis Using an Innovative Chat-Based Focus Group Method

Authors: Timothy Whitehill

Abstract:

This paper presents an innovative chat-based focus group method for collecting qualitative data to construct a cause-and-effect analysis in business research. The method was developed in response to the research and data collection challenges posed by the Covid-19 outbreak in the United Kingdom during 2020-21. This paper discusses the methodological approach and builds a contemporary argument for its effectiveness in exploring cause-and-effect relationships in the context of focus group research, systems thinking, and problem structuring methods. The pilot for this method was conducted between October 2020 and March 2021 and collected more than 7,000 words of chat-based data, which were used to construct a consensus-based cause-and-effect analysis. The method was developed in support of an ongoing Doctorate in Business Administration (DBA) thesis, which is using Design Science Research methodology to operationalize organisational resilience in UK construction sector firms.

Keywords: cause-and-effect analysis, focus group research, problem structuring methods, qualitative research, systems thinking

Procedia PDF Downloads 205
40757 Hierarchical Filtering Method of Threat Alerts Based on Correlation Analysis

Authors: Xudong He, Jian Wang, Jiqiang Liu, Lei Han, Yang Yu, Shaohua Lv

Abstract:

Nowadays, internet threats are enormous and increasing, yet the classification of the huge number of alert messages generated in this environment is relatively coarse. This affects the accuracy of network situation assessment and makes it harder for security managers to deal with emergencies. To deal with potential network threats effectively and provide better data for network situation awareness, it is essential to build a hierarchical filtering method. This paper establishes a data monitoring model that filters the original data systematically to grade threats and stores the results for reuse. First, it filters on vulnerable resources, open ports of host devices, and services. Then, entropy theory is used to calculate the performance changes of the host devices at the time the threat occurs, and the alerts are filtered again. Finally, the changes in performance values at the time of the threat are sorted. Alerts and performance data collected in a real network environment are used for evaluation and analysis. The comparative experimental analysis shows that the method can filter threat alerts effectively.
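
The entropy step can be illustrated as follows (a hypothetical sketch with invented CPU samples): the Shannon entropy of a host performance metric is compared before and after an alert, and the change is used to rank or filter alerts.

```python
# Hypothetical sketch: Shannon entropy change of a host metric around an alert.
import numpy as np

def shannon_entropy(samples: np.ndarray, bins: int = 10) -> float:
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
cpu_before = rng.normal(30, 2, size=200)                                 # % CPU before the alert
cpu_after = rng.normal(30, 2, size=200) + rng.choice([0, 40], size=200)  # bursty afterwards

delta = shannon_entropy(cpu_after) - shannon_entropy(cpu_before)
print(f"entropy change around alert: {delta:.2f} bits")
```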

Keywords: correlation analysis, hierarchical filtering, multisource data, network security

Procedia PDF Downloads 181
40756 Application of Artificial Neural Network Technique for Diagnosing Asthma

Authors: Azadeh Bashiri

Abstract:

Introduction: Lack of proper diagnosis and inadequate treatment of asthma lead to physical and financial complications. This study aimed to use data mining techniques to create an intelligent neural network system for the diagnosis of asthma. Methods: The study population consists of patients who had visited one of the lung clinics in Tehran. Data were analyzed using SPSS, and Pearson's chi-square statistic was the basis for ranking the variables. The neural network is trained using the backpropagation learning technique. Results: Based on the SPSS analysis used to select the top factors, 13 effective factors were chosen. The data were combined in various forms to build different models for training and testing the network, and in all configurations the network correctly predicted 100% of the cases. Conclusion: Using data mining methods before designing the system structure, in order to reduce the data dimension and choose the data optimally, leads to a more accurate system. Considering data mining approaches is therefore necessary given the nature of medical data.
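
The pipeline described above can be sketched as follows (hypothetical, with synthetic patient records and an invented label rule): a chi-square test ranks candidate factors, the 13 strongest are kept, and a backpropagation-trained neural network is fitted on them.

```python
# Hypothetical sketch: chi-square feature ranking then a backpropagation-trained MLP.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
X = rng.integers(0, 2, size=(300, 20))             # 20 binary clinical factors
y = (X[:, 0] | X[:, 3]) & X[:, 7]                  # invented asthma label rule

selector = SelectKBest(chi2, k=13).fit(X, y)       # keep the 13 strongest factors
X_top = selector.transform(X)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_top, y)                                  # trained with backpropagation
print(f"training accuracy: {clf.score(X_top, y):.2f}")
```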

Keywords: asthma, data mining, Artificial Neural Network, intelligent system

Procedia PDF Downloads 256