Search results for: terrorism data analysis
41833 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
Real-estate value estimation is difficult for laypeople and is usually performed by specialists. This paper presents an automated estimation process, based on big data and machine-learning technology, that calculates the influence of building conditions on real-estate prices. The present study analyzed actual building sales sample data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Building on that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for deriving machine-learning prototypes. The prediction model is formulated by reference to previous examples; when new examples are supplied, it analyzes them and predicts accordingly. The analysis process identifies the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.
Keywords: apartment complex, big data, life-cycle building value analysis, machine learning
Procedia PDF Downloads 370
41832 A Comparative Analysis of Islamic Bank Efficiency in the United Kingdom and Indonesia during the Eurozone Crisis Using Data Envelopment Analysis
Authors: Nisful Laila, Fatin Fadhilah Hasib, Puji Sucia Sukmaningrum, Achsania Hendratmi
Abstract:
The purpose of this study is to determine and compare the efficiency levels of Islamic banks in Indonesia and the United Kingdom during the Eurozone sovereign debt crisis. The study uses a quantitative non-parametric approach, Data Envelopment Analysis (DEA) under the variable-returns-to-scale (VRS) assumption, together with the Mann-Whitney U-test as a statistical tool. The sample consists of 11 Islamic banks in Indonesia and 4 Islamic banks in the United Kingdom. The research uses the intermediation approach: the input variables are total deposits, assets, and labour costs, and the output variables are financing and profit/loss. The study shows that the efficiency of Islamic banks in Indonesia and the United Kingdom varied and fluctuated during the observation period, and that there is no significant difference in efficiency performance between Islamic banks in the two countries.
Keywords: data envelopment analysis, efficiency, eurozone crisis, islamic bank
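As a rough illustration of the final comparison step, the sketch below runs a Mann-Whitney U-test on two hypothetical sets of DEA (VRS) efficiency scores; the scores and group sizes are invented stand-ins, not the study's actual results.

```python
from scipy.stats import mannwhitneyu

# Hypothetical mean DEA (VRS) efficiency scores, one value per bank
indonesia_scores = [0.81, 0.77, 0.92, 0.68, 0.85, 0.74, 0.90, 0.79, 0.83, 0.71, 0.88]
uk_scores = [0.86, 0.72, 0.95, 0.80]

stat, p_value = mannwhitneyu(indonesia_scores, uk_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# A p-value above the chosen significance level would support the abstract's
# finding of no significant efficiency difference between the two groups.
```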
Procedia PDF Downloads 325
41831 The Trend of Injuries in Building Fire in Tehran from 2002 to 2012
Authors: Mohammadreza Ashouri, Majid Bayatian
Abstract:
Analysis of fire data underpins any plan to improve the level of safety in cities. Such an analysis can reveal signs of change over a given period and can be used as a measure of safety. Information on about 66,341 fires (from 2002 to 2012) released by the Tehran Safety Services and Fire-Fighting Organization, together with data on population and the number of households provided by the Tehran Municipality and the Statistical Yearbook of Iran, was extracted. Using these data, changes in fire incidence, the injury rate, and the mortality rate were determined and analyzed. The fire injury and mortality rates were 59.58 and 86.12 per one million population of Tehran, respectively. During the study period, the number of fires and fire stations increased by 104.38% and 102.63%, respectively. Most fires (9.21%) happened in the 4th District of Tehran. The results showed that the recorded fire data have not been systematically organized for fire prevention, although one of the ways to reduce fire injuries is to develop a systematic plan of necessary actions for emergency situations. To establish a reliable basis for fire prevention, the stages, the definitions of working processes, and the cause-and-effect chains should be considered. Therefore, a comprehensive statistical system should be developed for reported and recorded fire data.
Keywords: fire statistics, fire analysis, accident prevention, Tehran
Procedia PDF Downloads 182
41830 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis
Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi
Abstract:
The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population levels, limitations in data and research persist in low- and middle-income countries (LMICs), posing a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence on the availability of longitudinal datasets and the degree of consistency across the longitudinal studies conducted. Utilizing prompts proved instrumental in streamlining the analysis process, facilitating access, crafting code snippets, and categorizing and analyzing extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed that 10% of the articles had been incorrectly identified, along with 2 duplicates, and underscored that longitudinal MH research is concentrated in South Africa and focuses on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed. Ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data. Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.
Keywords: longitudinal mental health research, data sharing, fair data principles, Africa, landscape analysis
Procedia PDF Downloads 88
41829 Coverage Probability Analysis of WiMAX Network under Additive White Gaussian Noise and Predicted Empirical Path Loss Model
Authors: Chaudhuri Manoj Kumar Swain, Susmita Das
Abstract:
This paper describes a detailed procedure for predicting a path loss (PL) model and its application to estimating the coverage probability of a WiMAX network. A hybrid approach is followed in predicting an empirical PL model of a 2.65 GHz WiMAX network deployed in a suburban environment. Data collection, statistical analysis, and regression analysis are the phases of operation incorporated in this approach, and the importance of each phase is discussed. The procedure for collecting data such as the received signal strength indicator (RSSI) through an experimental setup is demonstrated. From the collected data set, empirical PL and RSSI models are predicted with regression techniques. Furthermore, with the aid of the predicted PL model, essential parameters such as the PL exponent as well as the coverage probability of the network are evaluated. This work may significantly assist in the deployment and optimisation of any cellular network.
Keywords: WiMAX, RSSI, path loss, coverage probability, regression analysis
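A minimal sketch of the regression step is shown below: a log-distance path loss model is fitted to hypothetical RSSI-versus-distance samples to recover the PL exponent, and a lognormal shadowing margin is turned into an edge coverage probability. The distances, RSSI values, shadowing standard deviation, and fade margin are assumed for illustration, not measured values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical drive-test samples: distance from the base station (m) and RSSI (dBm)
d = np.array([50, 100, 200, 400, 800, 1200])
rssi = np.array([-58, -64, -71, -79, -86, -91])

d0 = 50.0                          # reference distance (m)
pl_rel = rssi[0] - rssi            # path loss relative to PL(d0), in dB
x = 10 * np.log10(d / d0)

# Log-distance model PL(d) = PL(d0) + 10 n log10(d/d0): the slope of the fit is n
n, _ = np.polyfit(x, pl_rel, 1)
print(f"estimated path-loss exponent n = {n:.2f}")

# Coverage probability at the cell edge under lognormal shadowing (assumed values)
sigma_db = 6.0                     # shadowing standard deviation (dB)
fade_margin_db = 8.0               # link budget margin at the edge (dB)
print(f"edge coverage probability ≈ {norm.cdf(fade_margin_db / sigma_db):.2f}")
```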
Procedia PDF Downloads 173
41828 Urban Change Detection and Pattern Analysis Using Satellite Data
Authors: Shivani Jha, Klaus Baier, Rafiq Azzam, Ramakar Jha
Abstract:
In India, people generally migrate from rural to urban areas for better infrastructure, a higher standard of living, good job opportunities, and better transport and communication. Unplanned urban development driven by this migration causes serious damage to land use, increases water pollution, and strains available water resources. In the present work, an attempt has been made to use satellite data from different years for urban change detection in the Chennai metropolitan area, along with pattern analysis to generate future scenarios of urban development using buffer zoning in a GIS environment. The analysis uses SRTM (30 m) elevation data and IRS-1C satellite data for the years 1990, 2000, and 2014. The flow accumulation, aspect, flow direction, and slope maps developed from the SRTM 30 m data are very useful for finding suitable locations for industrial setups and urban settlements. The Normalized Difference Vegetation Index (NDVI) and Principal Component Analysis (PCA) have been used in ERDAS Imagine software for land use change detection in the Chennai metropolitan area. It has been observed that the urban area has increased exponentially, with a significant decrease in agricultural and barren lands. However, the water bodies located in the study region are protected and are used as freshwater sources for drinking purposes. Using buffer zone analysis in the GIS environment, it has been observed that development has taken place predominantly in the south-west direction and is expected to continue in that direction.
Keywords: urban change, satellite data, the Chennai metropolis, change detection
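As a rough sketch of the NDVI-based change detection step, the snippet below computes NDVI for two dates from red and near-infrared bands and flags pixels with a sharp vegetation loss; the random arrays and the -0.2 change threshold are placeholders for the actual IRS-1C bands and a calibrated threshold.

```python
import numpy as np

# Placeholder reflectance bands (values in [0, 1]); in practice these would be
# the red and near-infrared bands of the co-registered 1990 and 2014 scenes.
rng = np.random.default_rng(0)
red_1990, nir_1990 = rng.random((100, 100)), rng.random((100, 100))
red_2014, nir_2014 = rng.random((100, 100)), rng.random((100, 100))

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

change = ndvi(nir_2014, red_2014) - ndvi(nir_1990, red_1990)

# Pixels whose NDVI dropped sharply are candidates for vegetation-to-built-up change
candidate_built_up = change < -0.2
print("share of candidate change pixels:", candidate_built_up.mean())
```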
Procedia PDF Downloads 407
41827 Analysis of Bored Piles with and without Geogrid in a Selected Area in Kocaeli/Turkey
Authors: Utkan Mutman, Cihan Dirlik
Abstract:
At a wastewater treatment site in the Kocaeli district of Turkey, bored piling was carried out to improve the ground under the aeration basin. In this study, the degree of ground improvement achieved by the bored piles installed in the field was investigated. In this context, the ground conditions before and after improvement were examined, and solution values were obtained by finite element analysis using the Plaxis program. The analyses considered the ground under the diffusers of the aeration basin both with and without geogrid reinforcement. On the improved ground, pile continuity and pile load tests were carried out to check the quality of the constructed bored piles. Taking into consideration both the field data and the dynamic loads in the aeration basin, an analysis was performed in Plaxis, and the data obtained from the analysis results were compared with the data obtained in the field.
Keywords: geogrid, bored pile, soil improvement, plaxis
Procedia PDF Downloads 265
41826 Data Collection in Protected Agriculture for Subsequent Big Data Analysis: Methodological Evaluation in Venezuela
Authors: Maria Antonieta Erna Castillo Holly
Abstract:
During the last decade, data analysis, strategic decision making, and the use of artificial intelligence (AI) tools in Latin American agriculture have been a challenge. In some countries, the availability, quality, and reliability of historical data, in addition to the current data recording methodology in the field, make it difficult to use information systems, perform complete data analysis, and rely on them for the right strategic decisions. This is essential in Agriculture 4.0, where the increase in global demand for fresh agricultural products of tropical origin throughout the year requires a change in the production model and greater agility in responding to consumer market demands for quality, quantity, traceability, and sustainability, all of which require extensive data. Having quality information available and updated in real time on what, how much, how, when, where, and at what cost production takes place, and on compliance with production quality standards, represents the greatest challenge for sustainable and profitable agriculture in the region. The objective of this work is to present a methodological proposal for the collection of georeferenced data from the protected agriculture sector, specifically in production units (UP) with tall structures (greenhouses), initially for Venezuela, taking the state of Mérida as the geographical framework and horticultural products as target crops. The document presents some background information and explains the methodology and tools used in the three phases of the work: diagnosis, data collection, and analysis. As a result, an evaluation of the process is carried out, relevant data and dashboards are displayed, and the first satellite maps integrated with layers of information in a geographic information system are presented. Finally, some improvement proposals and tentatively recommended applications are added to the process, with the objective of providing better qualified and traceable georeferenced data for subsequent analysis and for more agile and accurate strategic decision making. One of the main findings of this study is the lack of quality data treatment in Latin America, and especially in the Caribbean basin; a key question is how to manage the lack of complete official data. The methodology has been tested with horticultural products, but it can be extended to other tropical crops.
Keywords: greenhouses, protected agriculture, data analysis, geographic information systems, Venezuela
Procedia PDF Downloads 130
41825 Applications of Big Data in Education
Authors: Faisal Kalota
Abstract:
Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in education. Additionally, it discusses some of the concerns related to Big Data and current research trends. While Big Data can provide big benefits, it is important that institutions understand their own needs, infrastructure, resources, and limitations before jumping on the Big Data bandwagon.
Keywords: big data, learning analytics, analytics, big data in education, Hadoop
Procedia PDF Downloads 423
41824 The Analysis of Emergency Shutdown Valves Torque Data in Terms of Its Use as a Health Indicator for System Prognostics
Authors: Ewa M. Laskowska, Jorn Vatn
Abstract:
Industry 4.0 focuses on the digital optimization of industrial processes. The idea is to use extracted data to build a decision support model that enables their use for real-time decision making. In terms of predictive maintenance, the desired decision support tool would be a model enabling prognostics of the system's health based on the current condition of the equipment considered. Within the area of system prognostics and health management, a commonly used health indicator is the Remaining Useful Lifetime (RUL) of a system. Because the RUL is a random variable, it has to be estimated from available health indicators. Health indicators can be of different types and come from different sources: process variables, equipment performance variables, data related to the number of experienced failures, etc. The aim of this study is the analysis of performance variables of emergency shutdown valves (ESV) used in the oil and gas industry. ESVs are inspected periodically, and at each inspection the torque and operating time of the valve are registered. The data will be analyzed by means of machine learning or statistical analysis. The purpose is to investigate whether the available data could be used as a health indicator for prognostic purposes. The second objective is to examine the most efficient way to incorporate the data into a predictive model: whether the data can be applied in the form of explanatory variables in a Markov process, or whether other stochastic processes would be more convenient for building an RUL model based on the information coming from the registered data.
Keywords: emergency shutdown valves, health indicator, prognostics, remaining useful lifetime, RUL
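As a first-pass illustration of turning the registered torque readings into a health indicator, the sketch below fits a linear degradation trend to hypothetical inspection data and extrapolates it to an assumed torque acceptance limit; the inspection times, torque values, and limit are invented, and the paper's Markov/stochastic-process modelling is not reproduced here.

```python
import numpy as np

# Hypothetical inspection history: operating time (months) and peak closing torque (Nm)
t = np.array([0, 6, 12, 18, 24, 30, 36])
torque = np.array([510, 516, 525, 538, 547, 560, 574])
TORQUE_LIMIT = 650.0  # assumed acceptance limit, e.g. from the valve datasheet

# Simple linear degradation trend as a first-pass health indicator
slope, intercept = np.polyfit(t, torque, 1)
t_cross = (TORQUE_LIMIT - intercept) / slope      # time when the trend hits the limit
rul_estimate = t_cross - t[-1]                    # point-estimate RUL from the last inspection

print(f"degradation rate ≈ {slope:.2f} Nm/month, point-estimate RUL ≈ {rul_estimate:.1f} months")
```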
Procedia PDF Downloads 90
41823 A CFD Analysis of Hydraulic Characteristics of the Rod Bundles in the BREST-OD-300 Wire-Spaced Fuel Assemblies
Authors: Dmitry V. Fomichev, Vladimir V. Solonin
Abstract:
This paper presents the findings from a numerical simulation of the flow in 37-rod fuel assembly models spaced by a double-wire trapezoidal wrapping, as applied to the BREST-OD-300 experimental nuclear reactor. Data on the static pressure distribution within the models and equations for determining the fuel bundle flow friction factors have been obtained. Recommendations are provided on the use of the turbulence closure models available in ANSYS Fluent. A comparative analysis has been performed against the existing empirical equations for determining the flow friction factors, and the fit between the calculated and experimental data has been shown. An analysis of the experimental data and of the numerical simulation results for the hydrodynamic performance of the BREST-OD-300 fuel rod assembly is presented.
Keywords: BREST-OD-300, wire-spaced, fuel assembly, computational fluid dynamics
Procedia PDF Downloads 380
41822 Corporate Governance and Share Prices: Firm Level Review in Turkey
Authors: Raif Parlakkaya, Ahmet Diken, Erkan Kara
Abstract:
This paper examines the relationship between corporate governance ratings and the stock prices of 26 Turkish firms listed on the Turkish stock exchange (Borsa Istanbul), using panel data analysis over a five-year period. The paper also investigates the stock performance of rated firms relative to the market portfolio (i.e., the BIST 100 Index) both before and after governance scoring began. The empirical results show no relation between corporate governance rating and stock prices when panel data on the annual variation in both rating scores and stock prices are used. Further analysis yields the surprising result that, while the selected firms significantly outperform the market prior to rating, the same performance does not continue afterwards.
Keywords: corporate governance, stock price, performance, panel data analysis
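A minimal sketch of the kind of panel regression involved is given below: a two-way fixed-effects estimate of annual stock returns on governance ratings, implemented with firm and year dummies. The data are randomly generated stand-ins for the 26-firm, five-year panel, and the estimator choice (dummy-variable OLS rather than a dedicated panel package) is an assumption, not the paper's stated method.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Randomly generated stand-in panel: 26 firms x 5 years of ratings and returns
rng = np.random.default_rng(1)
rows = [(f, y, rng.uniform(7.0, 9.5), rng.normal(0.08, 0.20))
        for f in range(26) for y in range(2010, 2015)]
df = pd.DataFrame(rows, columns=["firm", "year", "gov_rating", "stock_return"])

# Two-way fixed effects via firm and year dummies
X = pd.get_dummies(df[["gov_rating", "firm", "year"]],
                   columns=["firm", "year"], drop_first=True).astype(float)
X = sm.add_constant(X)
result = sm.OLS(df["stock_return"], X).fit(cov_type="HC1")

print("governance coefficient:", round(result.params["gov_rating"], 4),
      "p-value:", round(result.pvalues["gov_rating"], 3))
```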
Procedia PDF Downloads 390
41821 Data Collection Based on the Questionnaire Survey In-Hospital Emergencies
Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala
Abstract:
The methods identified for data collection are diverse: electronic media, focus group interviews, and short-answer questionnaires [1]. The collection of poor-quality data (resulting, for example, from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data) allows conclusions to be drawn that are not supported by the data, or focuses attention only on the average effect of a program or policy. There are several solutions to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments, or using technologies that allow better "anonymity" in the responses [2]. In this context, we opted to collect good-quality data through a sizeable questionnaire-based survey on hospital emergencies, with the aim of improving emergency services and alleviating the problems encountered. In this paper, we present our study and detail the steps followed to collect relevant, consistent, and practical data.
Keywords: data collection, survey, questionnaire, database, data analysis, hospital emergencies
Procedia PDF Downloads 106
41820 Corruption, a Prelude to Problems of Governance in Pakistan
Authors: Umbreen Javaid
Abstract:
Pakistan's experience with the nascent, yet-to-evolve democratic institutions inherited from the British Empire has not been a pleasant one when evaluated in terms of good governance, development, and the success of anti-corruption mechanisms. The country has remained entangled in a vicious circle of accumulating large budget deficits, a dwindling economy, low foreign direct investment, political instability, and rising terrorism. It is thus not surprising that accounts of the state analyzing its six-decade journey since inception are replete with negative labels such as dysfunctional, failed, fragile, or weak. The limited pool of experience in handling democratic institutions, and the lack of political will on the part of the country's political elite to transform society on democratic footings, have left Pakistan a "limited access order" state. Widespread illiteracy becomes a double-edged sword when a largely illiterate electorate elects representatives who mostly come from semi-educated backgrounds, with limited understanding of democratic minutiae and little or no inclination to resist monetary allures. The prevalence of a culture of patronage and widespread poverty, coupled with the absence of a comprehensive system for investigating, prosecuting, and adjudicating cases of corruption, encourages a practice that has been eroding the state's foundations since its inception, owing to the unwillingness of traditional elites who have strongly resisted any attempt at dispersing power. An analytical study of the historical, political, cultural, economic, and administrative hurdles impeding Pakistan's transition to a democratic, accountable society would be instrumental in understanding the widespread plague of corruption and the state's inability to cope with it effectively. The issue of corruption in Pakistan becomes more important when seen in the context of the country's vulnerability to terrorism and religious extremism. In this regard, Pakistan needs to learn a great deal from developed countries in order to evolve a comprehensive strategy for combating and preventing this pressing problem.
Keywords: Pakistan, corruption, anti-corruption, limited access order
Procedia PDF Downloads 301
41819 Multivariate Analysis of Spectroscopic Data for Agriculture Applications
Authors: Asmaa M. Hussein, Amr Wassal, Ahmed Farouk Al-Sadek, A. F. Abd El-Rahman
Abstract:
In this study, a multivariate analysis of potato spectroscopic data is presented to detect the presence or absence of brown rot disease. Near-infrared (NIR) spectroscopy (1,350-2,500 nm) combined with multivariate analysis was used as a rapid, non-destructive technique for the detection of brown rot disease in potatoes. Spectral measurements were performed on 565 samples, chosen randomly at the infection site on the potato slice. In this study, 254 infected and 311 uninfected (brown rot-free) samples were analyzed using different advanced statistical analysis techniques. The discrimination performance of different multivariate analysis techniques, including classification, pre-processing, and dimension reduction, was compared. Applying a random forest classifier with different pre-processing techniques to the raw spectra gave the best performance, with a total classification accuracy of 98.7% in discriminating infected potatoes from controls.
Keywords: Brown rot disease, NIR spectroscopy, potato, random forest
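The sketch below shows the general shape of such a spectral classification workflow with a random forest and cross-validation; the spectra are random placeholders for the 565 measured samples, and the scaling step stands in for whichever pre-processing the study actually found best.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder NIR spectra: 565 samples x 576 wavelength points (1,350-2,500 nm at 2 nm steps)
rng = np.random.default_rng(0)
X = rng.normal(size=(565, 576))
y = np.r_[np.ones(254), np.zeros(311)]   # 1 = infected, 0 = brown rot-free

clf = make_pipeline(
    StandardScaler(),                                  # stand-in for spectral pre-processing
    RandomForestClassifier(n_estimators=300, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.mean().round(3))
```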
Procedia PDF Downloads 189
41818 The Use of Network Tool for Brain Signal Data Analysis: A Case Study with Blind and Sighted Individuals
Authors: Cleiton Pons Ferreira, Diana Francisca Adamatti
Abstract:
Advancements in computer technology have made it possible to obtain information for research in biology and neuroscience. To give structure to the data from such studies, networks have long been used to represent important biological processes, and the use of these tools has shifted from purely illustrative and didactic to more analytic, including interaction analysis and hypothesis formulation. Many studies have involved this application, but not directly for the interpretation of data obtained from brain function, which calls for new development perspectives in neuroinformatics that build on existing tool models already disseminated by bioinformatics. This study includes an analysis of neurological data from electroencephalogram (EEG) signals using Cytoscape, an open-source software tool for visualizing complex networks in biological databases. The data were obtained from a comparative case study carried out at the University of Rio Grande (FURG), using EEG signals from a Brain-Computer Interface (BCI) with 32 electrodes recorded from a blind and a sighted individual during the execution of an activity that stimulated spatial ability. This study intends to present results that lead to better ways of using and adapting techniques that support the treatment of brain signal data, in order to enhance understanding and learning in neuroscience.
Keywords: neuroinformatics, bioinformatics, network tools, brain mapping
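As a rough sketch of how channel-level EEG data can be turned into a network for a tool like Cytoscape, the snippet below builds a correlation graph over 32 channels and exports it to GraphML; the signals, the 0.3 correlation threshold, and the file name are all assumptions for illustration.

```python
import numpy as np
import networkx as nx

# Placeholder EEG recording: 32 channels x 5000 samples for one participant
rng = np.random.default_rng(42)
eeg = rng.normal(size=(32, 5000))

corr = np.corrcoef(eeg)                    # channel-by-channel correlation matrix
G = nx.Graph()
G.add_nodes_from(range(32))
THRESHOLD = 0.3                            # assumed cut-off for drawing an edge
for i in range(32):
    for j in range(i + 1, 32):
        if abs(corr[i, j]) >= THRESHOLD:
            G.add_edge(i, j, weight=float(abs(corr[i, j])))

# Graph-level metrics can then be compared between the blind and sighted recordings,
# and the graph exported for visual inspection in Cytoscape.
print("network density:", nx.density(G))
nx.write_graphml(G, "eeg_network.graphml")
```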
Procedia PDF Downloads 181
41817 Clash of Civilizations without Civilizational Groups: Revisiting Samuel P. Huntington's Clash of Civilizations Theory
Authors: Jamal Abdi
Abstract:
This paper offers a critique of Samuel P. Huntington's Clash of Civilizations thesis. The overriding argument is that Huntington's thesis is characterized by a failure to distinguish between 'groups' and 'categories'. Multinational civilizations overcoming their internal collective action problems, which would enable them to pursue a unified strategy vis-à-vis the West, is a foundational assumption in his theory. Without devoting sufficient intellectual attention to the processes through which multinational civilizations may gain the capacity for concerted action, i.e., become a group, he contended that the post-Cold-War world would be shaped in large measure by interactions among seven or eight major civilizations. Thus, the failure to provide a convincing analysis of multinational civilizations' transition from categories to groups is a significant weakness in Huntington's clash theory. It is also suggested that so-called Islamic terrorism and the war on terror are not to be taken as expressions of a clash between a Western and an Islamic civilization, as terrorist organizations would be superfluous in a world characterized by a clash of civilizations. The consequences of multinational civilizations becoming groups are discussed in relation to contemporary Western superiority.
Keywords: clash of civilizations, groups, categories, groupism
Procedia PDF Downloads 205
41816 Disaster Resilience Analysis of Atlanta Interstate Highway System within the Perimeter
Authors: Mengmeng Liu, J. David Frost
Abstract:
The interstate highway system within the Atlanta Perimeter plays an important role in residents' daily lives. The serious impact of the Atlanta I-85 collapse implies that the transportation system in the region lacks a cohesive and comprehensive transportation plan; disaster resilience analysis of the transportation system is therefore necessary. Resilience is the system's capability to persist, or to maintain transportation services, when exposed to changes or shocks. This paper analyzes the resilience of the whole transportation system within the Perimeter and examines how removing interstate sections within the Perimeter affects that resilience. The data used in the paper are the Atlanta transportation networks and LEHD Origin-Destination Employment Statistics data. First, we calculate the traffic flow on each road section based on the LEHD data, assuming each trip travels along the shortest travel-time path. Second, we calculate the resilience measures, namely the flow-based connectivity and centrality of the transportation network, and examine how they change when each interstate section is removed from the current system. Finally, we obtain the resilience function curve of the interstates and identify the most resilient interstate section. The results show that the resilience calculation framework is effective and can provide useful information for transportation planning and for sustainability analysis of transportation infrastructure.
Keywords: connectivity, interstate highway system, network analysis, resilience analysis
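A toy version of the remove-a-link resilience loop is sketched below using networkx: each edge of a simplified freeway graph is removed in turn and the drop in hop-based global efficiency is reported as a connectivity proxy. The node names, travel times, and the choice of global efficiency (rather than the paper's flow-weighted measures) are illustrative assumptions.

```python
import networkx as nx

# Hypothetical simplified freeway graph; edge weights are travel times in minutes
G = nx.Graph()
G.add_weighted_edges_from([
    ("I-285N", "I-85N", 8), ("I-85N", "I-75/85", 6), ("I-75/85", "I-20E", 4),
    ("I-20E", "I-285E", 9), ("I-285E", "I-285N", 12), ("I-75/85", "I-20W", 4),
    ("I-20W", "I-285W", 9), ("I-285W", "I-285N", 11),
])

base = nx.global_efficiency(G)   # hop-based connectivity proxy (ignores weights)

for u, v in list(G.edges):
    H = G.copy()
    H.remove_edge(u, v)
    drop = 100 * (base - nx.global_efficiency(H)) / base
    print(f"removing {u} - {v}: connectivity drop ≈ {drop:.1f}%")
```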
Procedia PDF Downloads 259
41815 Estimation of Desktop E-Wastes in Delhi Using Multivariate Flow Analysis
Authors: Sumay Bhojwani, Ashutosh Chandra, Mamita Devaburman, Akriti Bhogal
Abstract:
This article uses material flow analysis to estimate e-waste in the Delhi/NCR region. The material flow analysis is based on sales data obtained from various sources. Much of the available sales data is unreliable because of the existence of a huge informal sector, which in India accounts for more than 90%; the scope of this study is therefore limited to the formal sector. Also, to project the sales data to 2030, we used linear regression to avoid complexity. Actual sales in the years following 2015 may vary non-linearly, but we have assumed a basic linear relation. The purpose of this study is to estimate the approximate quantity of desktop e-waste that will have arisen by the year 2030, so that preparations can begin for the ineluctable investment in treating this ever-rising e-waste. The results of this study can be used to plan a treatment plant for e-waste in Delhi.
Keywords: e-wastes, Delhi, desktops, estimation
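A minimal sketch of the projection logic is given below: formal-sector sales are extrapolated linearly and delayed by an assumed service life to estimate the waste arising in 2030. The sales figures and the seven-year lifespan are invented placeholders, not the study's data.

```python
import numpy as np

# Placeholder formal-sector desktop sales in Delhi/NCR (thousand units per year)
years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
sales = np.array([310, 335, 352, 378, 401, 420])

# Linear regression of sales on year, mirroring the study's linearity assumption
slope, intercept = np.polyfit(years, sales, 1)

LIFESPAN = 7  # assumed average desktop service life in years
# Units sold in year (t - LIFESPAN) are assumed to enter the waste stream in year t
ewaste_2030 = slope * (2030 - LIFESPAN) + intercept
print(f"projected desktop e-waste arising in 2030 ≈ {ewaste_2030:.0f} thousand units")
```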
Procedia PDF Downloads 257
41814 Geospatial Network Analysis Using Particle Swarm Optimization
Authors: Varun Singh, Mainak Bandyopadhyay, Maharana Pratap Singh
Abstract:
The shortest path (SP) problem concerns finding the shortest path from a specific origin to a specified destination in a given network while minimizing the total cost associated with the path. This problem has widespread applications. Important applications of the SP problem include vehicle routing in transportation systems, particularly in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem in transportation planning. Well-known evolutionary methods such as Genetic Algorithms (GA), Ant Colony Optimization, and Particle Swarm Optimization (PSO) have been applied to complex optimization problems to overcome the shortcomings of existing shortest path analysis methods. It has been reported by various researchers that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Furthermore, Geographic Information Systems (GIS) have emerged as key information systems for geospatial data analysis and visualization. This paper focuses on the application of PSO to the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the analysis results is carried out in GIS.
Keywords: particle swarm optimization, GIS, traffic data, outliers
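To make the PSO-for-shortest-path idea concrete, the sketch below uses a common priority-based encoding: each particle assigns a continuous priority to every node, a greedy decoder turns those priorities into a path, and standard PSO updates search over the priority vectors. The tiny five-node graph, the PSO parameters, and the encoding choice are illustrative assumptions, not the paper's actual network or implementation.

```python
import numpy as np

# Hypothetical road network: adjacency dict with travel costs (e.g. minutes)
graph = {0: {1: 4, 2: 2}, 1: {0: 4, 3: 5}, 2: {0: 2, 3: 8, 4: 10},
         3: {1: 5, 2: 8, 4: 2}, 4: {2: 10, 3: 2}}
origin, destination, n_nodes = 0, 4, 5

def decode(priorities):
    """Greedily follow the unvisited neighbour with the highest priority."""
    node, cost, visited = origin, 0.0, {origin}
    while node != destination:
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:
            return 1e6                      # dead end -> heavy penalty
        nxt = max(candidates, key=lambda n: priorities[n])
        cost += graph[node][nxt]
        visited.add(nxt)
        node = nxt
    return cost

rng = np.random.default_rng(0)
n_particles, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
x = rng.uniform(0, 1, (n_particles, n_nodes))   # each particle is a priority vector
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([decode(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    cost = np.array([decode(p) for p in x])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = x[better], cost[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best path cost found:", pbest_cost.min())
```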
Procedia PDF Downloads 481
41813 An Analysis of the Relation between Need for Psychological Help and Psychological Symptoms
Authors: İsmail Ay
Abstract:
This study aimed to determine the relations between the need for psychological help and psychological symptoms. The sample consists of 530 university students enrolled at Atatürk University in the 2015-2016 academic year. The Need for Psychological Help Scale and the Brief Symptom Inventory were used to collect data. In the data analysis, correlation analysis and a structural equation model with latent variables were used. Normality and homogeneity analyses were used to check the basic assumptions of parametric tests. The findings show that as psychological symptoms increase, the need for psychological help also increases. The findings are discussed in light of the literature.
Keywords: psychological symptoms, need for psychological help, structural equation model, correlation
Procedia PDF Downloads 368
41812 The Comparison of Joint Simulation and Estimation Methods for the Geometallurgical Modeling
Authors: Farzaneh Khorram
Abstract:
This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and pinpoint blocks with the highest potential for energy usage during the grinding process within a specified region. Leveraging geostatistical techniques, particularly joint estimation or simulation based on geometallurgical data from various mineral processing stages, our objective is to forecast CCE across the study area. The dataset encompasses variables obtained from 2754 drill samples and a block model comprising 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved the assessment of contacts between these units and the estimation of CCE via cokriging, considering its correlation with SPI. The selection of blocks exhibiting maximum CCE holds paramount importance for cost estimation, production planning, and risk mitigation. The study conducted exploratory data analysis on lithology, rock type, and failure variables, revealing seamless boundaries between geometallurgical units. Simulation methods, such as plurigaussian and turning bands, demonstrated more realistic outcomes than cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.
Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging
Procedia PDF Downloads 68
41811 Leveraging Unannotated Data to Improve Question Answering for French Contract Analysis
Authors: Touila Ahmed, Elie Louis, Hamza Gharbi
Abstract:
State-of-the-art question answering models have recently shown impressive performance, especially in a zero-shot setting. This approach is particularly useful when confronted with a highly diverse domain such as the legal field, in which it is increasingly difficult to have a dataset covering every notion and concept. In this work, we propose a flexible generative question answering approach to contract analysis, as well as a weakly supervised procedure to leverage unannotated data and boost our models' performance in general, and their zero-shot performance in particular.
Keywords: question answering, contract analysis, zero-shot, natural language processing, generative models, self-supervision
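For orientation, the sketch below runs an off-the-shelf extractive QA pipeline over a short French contract clause; the Hugging Face model name is an illustrative stand-in for a publicly available multilingual checkpoint, and this is not the generative, weakly supervised system the abstract describes.

```python
from transformers import pipeline

# Illustrative multilingual extractive QA model (assumed checkpoint name),
# standing in for the paper's own generative, weakly supervised model.
qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

contract_clause = (
    "Le présent contrat prend effet le 1er janvier 2023 et est conclu pour une "
    "durée de trente-six mois, renouvelable par tacite reconduction."
)
result = qa(question="Quelle est la durée du contrat ?", context=contract_clause)
print(result["answer"], round(result["score"], 3))
```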
Procedia PDF Downloads 191
41810 The Maximum Throughput Analysis of UAV Datalink 802.11b Protocol
Authors: Inkyu Kim, SangMan Moon
Abstract:
The IEEE 802.11b protocol provides a data rate of up to 11 Mbps, whereas the aerospace industry is seeking higher-data-rate COTS data link systems for UAVs. The Total Maximum Throughput (TMT) and delay time have been studied by many researchers in past years. This paper provides the theoretical data throughput performance of a UAV formation flight data link using existing 802.11b performance theory. We operate a UAV formation flight of more than 30 quadcopters using the 802.11b protocol. We expect that the number of UAVs in a formation flight will be bounded by the performance limitations of the data link protocol.
Keywords: UAV datalink, UAV formation flight datalink, UAV WLAN datalink application, UAV IEEE 802.11b datalink application
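A back-of-the-envelope version of the single-link maximum throughput calculation is sketched below, using commonly cited 802.11b DSSS timing parameters (long preamble) and ignoring collisions and retransmissions; the constants are nominal values that should be checked against the standard, and the multi-UAV contention analysed in the paper is not modelled.

```python
# Nominal 802.11b (DSSS, long preamble) parameters, in seconds / bytes / bits per second
SLOT, SIFS, DIFS = 20e-6, 10e-6, 50e-6
PLCP = 192e-6                                   # PLCP preamble + header duration
CW_MIN = 31                                     # minimum contention window
MAC_HEADER, ACK_FRAME, PAYLOAD = 34, 14, 1500   # bytes
DATA_RATE, ACK_RATE = 11e6, 2e6                 # payload at 11 Mbps, ACK at 2 Mbps

t_data = PLCP + (MAC_HEADER + PAYLOAD) * 8 / DATA_RATE
t_ack = PLCP + ACK_FRAME * 8 / ACK_RATE
t_backoff = CW_MIN / 2 * SLOT                   # mean backoff, no collisions assumed

cycle = DIFS + t_backoff + t_data + SIFS + t_ack
tmt = PAYLOAD * 8 / cycle
print(f"theoretical maximum throughput ≈ {tmt / 1e6:.2f} Mbit/s")   # roughly 6 Mbit/s
```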
Procedia PDF Downloads 391
41809 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling
Authors: Sushma Ghogale
Abstract:
With the surge in user-generated content, feedback, and reviews on the internet, it has become both possible and important to know consumers' opinions about products and services. These data are important for both potential customers and the businesses providing the services. Data from social media are attracting significant attention, and social media has become the most prominent channel for expressing unregulated opinions. Prospective customers look for reviews from experienced customers before deciding to buy a product or service, and several websites provide a platform for users to post their feedback for the provider and potential customers. However, the biggest challenge in analyzing such data lies in extracting latent features and providing term-level analysis. This paper proposes an approach that uses topic modeling to classify reviews into topics and sentiment analysis to mine the opinions. The approach can analyze and classify latent topics mentioned by reviewers on business sites, review sites, or social media, using topic modeling to identify the importance of each topic, followed by sentiment analysis to assess the satisfaction level for each topic. The approach classifies hotel reviews using multiple machine learning techniques and compares different classifiers for mining the opinions in user reviews through sentiment analysis. The experiments conclude that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis
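A compact sketch of the two-stage idea (LDA topics plus a Multinomial Naïve Bayes sentiment classifier) is shown below on a handful of made-up reviews; the corpus, labels, and three-topic setting are placeholders, and training and predicting on the same tiny set is only for demonstration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.naive_bayes import MultinomialNB

# Made-up hotel reviews with sentiment labels (1 = positive, 0 = negative)
reviews = [
    "great location and friendly staff", "room was dirty and smelled bad",
    "breakfast buffet was excellent", "noisy air conditioning, poor sleep",
    "amazing pool and helpful reception", "rude staff at check in",
]
labels = [1, 0, 1, 0, 1, 0]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

# Topic modeling: group reviews into latent aspects (e.g. staff, room, food)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
topics = lda.transform(X).argmax(axis=1)

# Sentiment classification with the classifier the abstract found most accurate
clf = MultinomialNB().fit(X, labels)
for review, topic, pred in zip(reviews, topics, clf.predict(X)):
    print(f"topic {topic} | {'positive' if pred else 'negative'} | {review}")
```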
Procedia PDF Downloads 96
41808 Assessment of Social Vulnerability of Urban Population to Floods – a Case Study of Mumbai
Authors: Sherly M. A., Varsha Vijaykumar, Subhankar Karmakar, Terence Chan, Christian Rau
Abstract:
This study proposes an indicator-based framework for assessing the social vulnerability of any coastal megacity to floods. The final set of social vulnerability indicators is chosen from a set of feasible and available indicators, prepared in a Geographic Information System (GIS) framework at a fine scale (1-km grid cells) to provide insight into the spatial variability of vulnerability. The optimal weight for each individual indicator is assigned using data envelopment analysis (DEA), as it avoids subjective weighting and improves confidence in the results obtained. To de-correlate and reduce the dimensionality of the multivariate data, principal component analysis (PCA) is applied. The proposed methodology is demonstrated on twenty-four wards of Mumbai under the jurisdiction of the Municipal Corporation of Greater Mumbai (MCGM). The vulnerability assessment framework is not limited to the present study area and may be applied to other urban damage centers.
Keywords: urban floods, vulnerability, data envelopment analysis, principal component analysis
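As a rough sketch of the dimension-reduction step, the snippet below scales a hypothetical grid-cell indicator matrix, applies PCA, and combines the component scores into a simple composite index; the indicator values, the two-component choice, and the variance-ratio weighting are assumptions for illustration (the study's DEA weighting is not reproduced).

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Hypothetical indicator matrix: one row per 1-km grid cell, columns such as
# population density, share of elderly, informal housing, literacy, transit access
rng = np.random.default_rng(3)
indicators = rng.random((500, 5))

Z = MinMaxScaler().fit_transform(indicators)        # put indicators on a common scale
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)

# Simple composite: component scores weighted by their explained variance ratio
vulnerability_index = scores @ pca.explained_variance_ratio_
most_vulnerable = np.argsort(vulnerability_index)[-5:][::-1]
print("grid cells ranked most vulnerable:", most_vulnerable)
```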
Procedia PDF Downloads 358
41807 A Study of the Adaptive Reuse for School Land Use Strategy: An Application of the Analytic Network Process and Big Data
Authors: Wann-Ming Wey
Abstract:
With today's popularity and progress of information technology, big data sets and their analysis are no longer a major conundrum. We can now not only use relevant big data to analyze and simulate the likely course of urban development in the near future, but also use the analysis and simulation results to provide government units and decision-makers with a more comprehensive and reasonable basis for policy implementation. In this research, we take Taipei City as the study area and use relevant big data variables (e.g., population, facility utilization, and related social policy ratings) together with the Analytic Network Process (ANP) to examine in depth the possible reduction of land use by primary and secondary schools in Taipei City. In addition to enhancing urban activities through better utilization of urban public facilities, the final results of this research could help improve the efficiency of urban land use in the future. Furthermore, the assessment model and research framework established in this research provide a good reference for future land use and adaptive reuse strategies for schools and other public facilities.
Keywords: adaptive reuse, analytic network process, big data, land use strategy
Procedia PDF Downloads 203
41806 Social Media Mining with R: Twitter Analyses
Authors: Diana Codat
Abstract:
The analysis of tweets is part of text mining: each document is a written text, so it is possible to apply the usual text mining techniques, in particular by switching to the bag-of-words representation. But tweets have peculiarities, some of which may enrich the analysis. Their length is limited (at least as far as public messages are concerned), special characters make it possible to identify authors (@) and themes (#), and the tweet and retweet mechanisms make it possible to follow the diffusion of information. Conversely, other characteristics may disrupt the analysis. Because space is limited, authors often use abbreviations and emoticons to express feelings, and they do not pay much attention to spelling; all of this creates noise that can complicate the task. Tweets carry a lot of potentially interesting information, and their exploitation is one of the main axes of social network analysis. We show how to access Twitter messages, initiate a study of the properties of tweets, and follow up with the exploitation of the content of the messages. We work in R with the package 'twitteR'. The study of tweets is a strong focus of social network analysis because Twitter has become an important vector of communication. This example shows that it is easy to initiate an analysis from data extracted directly online, and that the data preparation phase is of great importance.
Keywords: data mining, language R, social networks, Twitter
Procedia PDF Downloads 184
41805 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments. This requires integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on open data is therefore increasing. However, each government has its own procedures for publishing its data, which results in a variety of data set formats because there are no international standards specifying the formats of data sets from open data bases. Due to this variety of formats, we must build a data integration process that is able to bring together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to each government's data sources in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the last two years, the government of Madrid has been publishing its open data on environmental indicators in real time, and other governments (such as Andalucia or Bilbao) have likewise published open data sets related to the environment. All of those data sets have different formats; our solution is able to integrate all of them, and furthermore it allows the user to perform and visualize analyses of the real-time data. Once the integration task is done, the data from any government share the same format, and the analysis process can be carried out more efficiently. The tool presented in this work therefore has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in R, a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and there are R libraries, such as shiny, for building graphical interfaces. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer so that they can build their own applications.
Keywords: open data, R language, data integration, environmental data
Procedia PDF Downloads 314
41804 Using Risk Management Indicators in Decision Tree Analysis
Authors: Adel Ali Elshaibani
Abstract:
Risk management indicators augment the reporting infrastructure, particularly for the board and senior management, to identify, monitor, and manage risks. This enhancement facilitates improved decision-making throughout the banking organization. Decision tree analysis is a tool that visually outlines the potential outcomes, costs, and consequences of complex decisions. It is particularly beneficial for analyzing quantitative data and making decisions based on numerical values; by calculating the expected value of each outcome, decision tree analysis can help assess the best course of action. In the context of banking, decision tree analysis can assist lenders in evaluating a customer's creditworthiness, thereby preventing losses. However, applying these tools in developing countries may face several limitations, such as limited data availability, a lack of technological infrastructure and resources, a shortage of skilled professionals, cultural factors, and cost. Moreover, decision trees can create overly complex models that do not generalize well to new data, a problem known as overfitting. They can also be sensitive to small changes in the data, which can result in different tree structures, and they can become computationally expensive when dealing with large datasets. In conclusion, while risk management indicators and decision tree analysis are beneficial for decision-making in banks, their effectiveness is contingent upon how they are implemented and utilized by the board of directors, especially in the context of developing countries. It is important to consider these limitations when planning to implement these tools in developing countries.
Keywords: risk management indicators, decision tree analysis, developing countries, board of directors, bank performance, risk management strategy, banking institutions
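To illustrate the creditworthiness use case, the sketch below trains a small decision tree on made-up loan applications and converts its predicted default probability into an expected loss for one applicant; the features, labelling rule, exposure amount, and depth limit are all hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up loan applications: income (k$), debt-to-income ratio, years with the bank
rng = np.random.default_rng(7)
X = np.column_stack([rng.uniform(10, 120, 300),
                     rng.uniform(0.0, 0.9, 300),
                     rng.integers(0, 20, 300)])
# Toy labelling rule: default (1) is more likely with low income and high debt ratio
y = ((X[:, 0] < 40) & (X[:, 1] > 0.5)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "debt_ratio", "tenure"]))

# Expected loss of approving one applicant = P(default) * exposure
p_default = tree.predict_proba([[35, 0.6, 2]])[0, 1]
print("expected loss on a 10,000 exposure:", round(p_default * 10_000, 2))
```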
Procedia PDF Downloads 58