Search results for: panel data analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 42378

41388 Fuzzy Total Factor Productivity by Credibility Theory

Authors: Shivi Agarwal, Trilok Mathur

Abstract:

This paper proposes a method to measure total factor productivity (TFP) change by credibility theory for fuzzy input and output variables. TFP change has been widely studied with crisp input and output variables; in some cases, however, the input and output data of decision-making units (DMUs) can only be measured with uncertainty. Such data can be represented as linguistic variables characterized by fuzzy numbers. The Malmquist productivity index (MPI) is widely used to estimate TFP change by calculating the total factor productivity of a DMU for different time periods using data envelopment analysis (DEA). Here, the fuzzy DEA (FDEA) model is solved using credibility theory, and the FDEA results are used to measure TFP change for fuzzy input and output variables. Finally, numerical examples are presented to illustrate the proposed method. The suggested methodology can be utilized for the performance evaluation of DMUs and can help assess their level of integration. It can also be applied to rank DMUs, identify those that are lagging behind, and recommend how they can improve their performance to bring them on par with the other DMUs.
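The crisp Malmquist index underlying the method can be sketched as follows. The distance-function values below are illustrative placeholders rather than results from the paper, and the fuzzy/credibility extension is not shown; in practice each value would come from solving a DEA (or FDEA) model.

```python
import math

def malmquist_index(d_t_xt, d_t_xt1, d_t1_xt, d_t1_xt1):
    """Malmquist productivity index as the geometric mean of two
    period-specific ratios of distance functions.

    d_a_xb: distance function of the period-b observation evaluated
    against the period-a frontier (e.g. a DEA efficiency score).
    """
    return math.sqrt((d_t_xt1 / d_t_xt) * (d_t1_xt1 / d_t1_xt))

# A DMU whose efficiency improves between periods yields MPI > 1.
mpi = malmquist_index(d_t_xt=0.8, d_t_xt1=0.9, d_t1_xt=0.7, d_t1_xt1=0.85)
print(round(mpi, 2))  # MPI > 1 indicates productivity growth
```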

Keywords: chance-constrained programming, credibility theory, data envelopment analysis, fuzzy data, Malmquist productivity index

Procedia PDF Downloads 365
41387 A Security Cloud Storage Scheme Based Accountable Key-Policy Attribute-Based Encryption without Key Escrow

Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun

Abstract:

With the development of cloud computing, more and more users are utilizing cloud storage services. However, several issues remain: 1) the cloud server may steal the shared data, 2) sharers may collude with the cloud server to steal the shared data, 3) the cloud server may tamper with the shared data, and 4) sharers and the key generation center (KGC) may conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. The data are encrypted to protect privacy, and hash algorithms are used to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks.
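The hash-based integrity component of such a scheme can be illustrated with a minimal sketch. SHA-256 stands in for whichever hash function the scheme actually uses; the AES encryption and the attribute-based key machinery are omitted here.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex digest used as an integrity fingerprint of uploaded data."""
    return hashlib.sha256(data).hexdigest()

# Owner uploads data to the cloud and keeps the digest locally.
original = b"shared file contents"
digest_at_upload = sha256_digest(original)

# On download, recompute and compare to detect server-side tampering.
downloaded_ok = b"shared file contents"
downloaded_bad = b"shared file contents (tampered)"
print(sha256_digest(downloaded_ok) == digest_at_upload)    # True: intact
print(sha256_digest(downloaded_bad) == digest_at_upload)   # False: tampered
```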

Keywords: cloud storage security, sharing storage, attributes, hash algorithm

Procedia PDF Downloads 390
41386 Prevalence of Listeria and Salmonella Contamination in FDA Recalled Foods

Authors: Oluwatofunmi Musa-Ajakaiye, Paul Olorunfemi M.D MPH, John Obafaiye

Abstract:

Introduction: The U.S. Food and Drug Administration (FDA) publishes public notices for recalled FDA-regulated products. This study reviewed the primary reasons for recalls of products of various types over a period of 7 years. Methods: The study analyzed data provided in the FDA’s archived recalls for the years 2010-2017. It identified the various reasons for product recalls in the categories of foods, beverages, drugs, medical devices, animal and veterinary products, and dietary supplements. Using SPSS version 29, descriptive statistics and chi-square analyses of the data were performed. Results: Over the period of analysis, a total of 931 recalls were reported. The most frequent reason for recalls was undeclared products (36.7%). The analysis showed that the most recalled product type in the data set was foods and beverages, representing 591 of all recalled products (63.5%). In addition, foods and beverages represented 77.2% of products recalled due to the presence of microorganisms. A sub-group analysis of food and beverage recalls found that the most prevalent reason was undeclared products (50.1%), followed by Listeria (17.3%) and Salmonella (13.2%). Conclusion: This analysis shows that foods and beverages account for the greatest percentage of total recalls, driven by undeclared products, Listeria contamination, and Salmonella contamination. The prevalence of Salmonella and Listeria contamination suggests a high risk of microbial contamination in FDA-regulated products, and further studies on the effects of such contamination should be conducted to ensure consumer safety.
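The sub-group percentages can be reproduced from raw recall counts. The counts below are hypothetical, chosen only so that the 591 food-and-beverage records and the reported shares (50.1%, 17.3%, 13.2%) come out as in the abstract; the real tallies come from the FDA archives.

```python
from collections import Counter

# Hypothetical per-record recall reasons for the 591 food/beverage recalls.
recalls = (["undeclared"] * 296 + ["Listeria"] * 102 +
           ["Salmonella"] * 78 + ["other"] * 115)

counts = Counter(recalls)
total = sum(counts.values())
shares = {reason: round(100 * n / total, 1) for reason, n in counts.items()}
print(total, shares)
```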

Keywords: food, beverages, Listeria, Salmonella, FDA, contamination, microbial

Procedia PDF Downloads 63
41385 Towards a Distributed Computation Platform Tailored for Educational Process Discovery and Analysis

Authors: Awatef Hicheur Cairns, Billel Gueni, Hind Hafdi, Christian Joubert, Nasser Khelifa

Abstract:

Given the ever-changing needs of the job markets, education and training centers are increasingly held accountable for student success. Therefore, they have to focus on ways to streamline their offerings and educational processes in order to achieve the highest level of quality in curriculum contents and managerial decisions. Educational process mining is an emerging field in the educational data mining (EDM) discipline, concerned with developing methods to discover, analyze, and provide a visual representation of complete educational processes. In this paper, we present our distributed computation platform, which allows different education centers and institutions to load their data and access advanced data mining and process mining services. To this end, we also present a comparative study of the different clustering techniques developed in the context of process mining to efficiently partition educational traces. Our goal is to find the best strategy for distributing heavy analysis computations on the many processing nodes of our platform.

Keywords: educational process mining, distributed process mining, clustering, distributed platform, educational data mining, ProM

Procedia PDF Downloads 454
41384 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries

Authors: Gaurav Kumar Sinha

Abstract:

In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.

Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency

Procedia PDF Downloads 64
41383 On the Principle of Sustainable Development and International Law

Authors: Zhang Rui

Abstract:

Context: The paper addresses the necessity of incorporating the principle of sustainable development into international law to guide states and international organizations towards achieving this goal. Research aim: To emphasize the importance of integrating sustainable development into international law and establishing procedures to attain this objective. Methodology: The study utilizes document analysis, comparative law analysis, and international law analysis to support the argument for including sustainable development in international legal frameworks. Findings: The findings suggest that integrating sustainable development into international law can lead to significant improvements in legal practices, treaty interpretations, and state behaviors. Theoretical importance: The paper highlights the potential impacts of the principle of sustainable development on reshaping existing legal norms and promoting sustainable practices globally. Data collection: The data is gathered through the analysis of relevant legal documents, comparative studies, and international legal frameworks. Analysis procedures: The analysis involves examining how the principle of sustainable development can influence legal outcomes, treaty interpretations, and state behaviors. Questions addressed: The study addresses how the principle of sustainable development can be integrated into international law and what implications this integration can have on legal practices and state behaviors. Conclusion: Integrating sustainable development into international law is crucial for advancing global sustainability objectives and guiding states and international organizations towards sustainable practices.

Keywords: international law, sustainable development, environmental legislation, sovereign equality

Procedia PDF Downloads 20
41382 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis

Authors: Amir Hajian, Sepehr Damavandinejadmonfared

Abstract:

In this paper, the issue of dimensionality reduction in finger vein recognition systems is investigated using kernel principal component analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as several kernel functions can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector. This dimension is of importance especially in real-world applications of such algorithms, where a fixed feature-vector dimension has to be set to reduce the dimension of the input and output data and extract features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA with polynomial, Gaussian, and Laplacian kernels in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
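The core computation can be sketched as a minimal kernel PCA in NumPy. A polynomial kernel and random stand-in data are used below; the paper's vein feature vectors and kernel parameters are not reproduced, and `k` plays the role of the feature extraction dimension being tuned.

```python
import numpy as np

def kernel_pca(X, k, kernel=lambda a, b: (a @ b + 1.0) ** 2):
    """Minimal kernel PCA: build the Gram matrix, double-center it,
    and project onto the top-k components (feature dimension k)."""
    n = X.shape[0]
    K = np.array([[kernel(x, y) for y in X] for x in X])
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)               # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:k]              # take top-k
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                            # n x k extracted features

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))          # stand-in for vein feature vectors
features = kernel_pca(X, k=4)
print(features.shape)
```

Sweeping `k` and scoring a downstream classifier on `features` is the experiment the abstract describes.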

Keywords: biometrics, finger vein recognition, principal component analysis (PCA), kernel principal component analysis (KPCA)

Procedia PDF Downloads 365
41381 Control the Flow of Big Data

Authors: Shizra Waris, Saleem Akhtar

Abstract:

Big data is a research area receiving attention from academia and IT communities. In the digital world, the amount of data produced and stored has grown rapidly within a short period of time. Consequently, this fast-increasing rate of data has created many challenges. In this paper, we use the functionalism and structuralism paradigms to analyze the genesis of big data applications and their current trends. This paper presents a complete discussion of state-of-the-art big data technologies based on batch and stream data processing, and the strengths and weaknesses of these technologies are analyzed. The study also covers big data analytics techniques, processing methods, some reported case studies from different vendors, several open research challenges, and the opportunities brought about by big data. The similarities and differences of these techniques and technologies, based on important limitations, are also investigated. Emerging technologies are suggested as a solution for big data problems.

Keywords: computer, IT community, industry, big data

Procedia PDF Downloads 194
41380 An Overview of Domain Models of Urban Quantitative Analysis

Authors: Mohan Li

Abstract:

Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research, and its share will increase greatly in the next few decades. Frequently, such analytical work cannot be carried out without some software engineering knowledge, and domain models of urban research are necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, building rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning, and throughout the work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from this massive amount of data, and how to manage and process it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the technologies and technological iterations that may affect urban research in the future and help discover urban problems and implement targeted sustainable urban strategies. They are summarized into seven domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamic domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, and GIS. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.

Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design

Procedia PDF Downloads 177
41379 Competition, Stability, and Economic Growth: A Causality Approach

Authors: Mahvish Anwaar

Abstract:

Research Question: In this paper, we explore the causal relationship between banking competition, banking stability, and economic growth. Research Findings: Unbalanced panel data covering 2000 to 2018 are collected to analyze the causality among banking competition, banking stability, and economic growth. The main focus of the study is to check the direction of causality among the selected variables. The results support the demand-following, supply-leading, feedback, and neutrality hypotheses, conditional on the different measures of banking competition, banking stability, and economic growth. Theoretical Implication: Jayakumar, Pradhan, Dash, Maradana, and Gaurav (2018) proposed a theoretical model of the causal relationship between banking competition, banking stability, and economic growth using different indicators; we empirically test the proposed indicators in our study. This study contributes to the literature by showing how the relationship differs between developing and developed countries. Policy Implications: The study offers policy implications for investors on how to properly manage their finances, and government agencies can draw on the present study to identify suitable policies by examining how the economy can grow with respect to its finances.
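The direction-of-causality check can be sketched as a bare-bones bivariate Granger F-test: does adding lags of x reduce the residual sum of squares of a regression of y on its own lags? The data below are synthetic; the paper's panel structure and variable definitions are not reproduced.

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic for H0: p lags of x do not help predict y,
    given p lags of y (minimal bivariate Granger-causality test)."""
    T = len(y)
    Y = y[p:]
    # Restricted design: constant + p lags of y.
    Xr = np.column_stack([np.ones(T - p)] +
                         [y[p - i - 1:T - i - 1] for i in range(p)])
    # Unrestricted design: additionally p lags of x.
    Xu = np.column_stack([Xr] + [x[p - i - 1:T - i - 1] for i in range(p)])
    rss = lambda M: np.sum((Y - M @ np.linalg.lstsq(M, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    dfd = T - p - Xu.shape[1]                    # residual degrees of freedom
    return ((rss_r - rss_u) / p) / (rss_u / dfd)

# Synthetic example where x clearly drives y.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(y, x) > 3.0)   # large F: x Granger-causes y
```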

Keywords: competition, stability, economic growth, vector autoregression, Granger causality

Procedia PDF Downloads 63
41378 Machine Learning-Based Workflow for the Analysis of Project Portfolio

Authors: Jean Marie Tshimula, Atsushi Togashi

Abstract:

We develop a data-science approach providing an interactive visualization and predictive models to find insights in the projects' historical data, so that stakeholders can spot unseen opportunities in the African market that might otherwise escape them behind the online project portfolio of the African Development Bank. This machine learning-based web application identifies the market trends of the fastest-growing economies across the continent, as well as skyrocketing sectors that have a significant impact on the future of business in Africa. Accordingly, the approach is tailored to predict where investment is most needed. Moreover, we create a corpus that includes the descriptions of more than 1,200 projects covering approximately 14 sectors across 53 African countries. We then sift through this large amount of semi-structured data to extract details that may point to directions worth following. In light of the foregoing, we apply a combination of Latent Dirichlet Allocation and Random Forests in the analysis module of our methodology to highlight the most relevant topics that investors may focus on when investing in Africa.

Keywords: machine learning, topic modeling, natural language processing, big data

Procedia PDF Downloads 168
41377 The Current Status of Middle Class Internet Use in China: An Analysis Based on the Chinese General Social Survey 2015 Data and Semi-Structured Investigation

Authors: Abigail Qian Zhou

Abstract:

In today's China, the well-educated middle class, with stable jobs and above-average income, is the driving force behind its Internet society. Through the analysis of data from the 2015 Chinese General Social Survey and 50 interviews, this study investigates this group’s specific Internet usage. The findings demonstrate that daily life among the members of this socioeconomic group is closely tied to the Internet. For the Chinese middle class, the Internet is used to socialize, to entertain themselves and others, to search for and share information, and to build their identities. The empirical results of this study will provide a reference, supported by factual data, for enterprises seeking to target the Chinese middle class through online marketing efforts.

Keywords: middle class, Internet use, network behaviour, online marketing, China

Procedia PDF Downloads 121
41376 Knowledge of Strategies to Teach Reading Components Among Teachers of Hard of Hearing Students

Authors: Khalid Alasim

Abstract:

This study investigated Saudi Arabian elementary school teachers’ knowledge of strategies to teach reading components to hard-of-hearing students. The study focused on four of the five reading components the National Reading Panel (NRP, 2000) identified: phonemic awareness, phonics, vocabulary, and reading comprehension, and explored the relationship between teachers’ demographic characteristics and their knowledge of the strategies as well. An explanatory sequential mixed-methods design was used that included two phases. The quantitative phase examined the knowledge of these Arabic reading components among 89 elementary school teachers of hard-of-hearing students, and the qualitative phase consisted of interviews with 10 teachers. The results indicated that the teachers have a great deal of knowledge (above the mean score) of strategies to teach reading components; their knowledge of strategies to teach the vocabulary component was the highest. The results also showed no significant association between teachers’ demographic characteristics and their knowledge of strategies to teach reading components. The qualitative analysis revealed two themes: 1) teachers’ lack of basic knowledge of strategies to teach reading components, and 2) the absence of in-service courses and training programs in reading for teachers.

Keywords: knowledge, reading, components, hard-of-hearing, phonology, vocabulary

Procedia PDF Downloads 80
41375 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale

Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo

Abstract:

Accurate elevation data are critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights above ground: 40 m, 80 m, and 120 m. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels and an uncalibrated dataset without them. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding surveyed coordinates. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics for each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro. With calibration for the same UAS, the accuracies improved significantly, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results, an RMSE of 0.032 m and an R² of 0.998, were obtained for the calibrated dataset generated using the Phantom 4 RTK at the 40 m height.
The accuracy of elevation determination decreased as the flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. These results show that calibration with ground control panels improves the accuracy of elevation determination, especially for the UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data with sufficient surveyed ground control panels for the 40 m dataset, while the Phantom 4 RTK provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation in precision agriculture applications.
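The two evaluation metrics can be computed directly from paired panel elevations. The values below are hypothetical stand-ins for surveyed coordinates and UAS-derived elevations, not data from the study.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean squared error between predicted and observed values."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs = np.asarray(obs, dtype=float)
    ss_res = np.sum((obs - np.asarray(pred)) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical elevations (m) for five ground control panels.
surveyed = [912.10, 912.45, 913.02, 912.77, 913.40]
uas      = [912.13, 912.41, 913.06, 912.74, 913.43]
print(round(rmse(uas, surveyed), 3), round(r_squared(uas, surveyed), 3))
```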

Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)

Procedia PDF Downloads 164
41374 Strategic Management Methods in Non-Profit Making Organization

Authors: P. Řehoř, D. Holátová, V. Doležalová

Abstract:

This paper deals with an analysis of strategic management methods in non-profit organizations in the Czech Republic. Strategic management represents an aggregate of methods and approaches that can be applied to managing organizations; in this article, the organizations are those which associate owners and keepers of non-state forest properties. The authors use the following strategic management methods: stakeholder analysis, SWOT analysis, and questionnaire inquiries. The questionnaire was distributed electronically via e-mail, and in October 2013 data were obtained from a total of 84 questionnaires. Based on the results, the authors recommend using a confrontation strategy, which improves the competitiveness of non-profit organizations.

Keywords: strategic management, non-profit making organization, strategy analysis, SWOT analysis, strategy, competitiveness

Procedia PDF Downloads 483
41373 Advancement of Computer Science Research in Nigeria: A Bibliometric Analysis of the Past Three Decades

Authors: Temidayo O. Omotehinwa, David O. Oyewola, Friday J. Agbo

Abstract:

This study aims to provide a proper perspective on the development landscape of Computer Science research in Nigeria. To this end, a bibliometric analysis of 4,333 bibliographic records of Computer Science research in Nigeria over the last 31 years (1991-2021) was carried out. The bibliographic data were extracted from the Scopus database and analyzed using VOSviewer and the bibliometrix R package through the biblioshiny web interface. The findings revealed that Computer Science research in Nigeria has a growth rate of 24.19%. The most developed and well-studied research areas in the field in Nigeria are machine learning, data mining, and deep learning. The social structure analysis revealed a need for improved international collaboration; the sparsely established collaborations are largely influenced by geographic proximity. The funding analysis showed that Computer Science research in Nigeria is under-funded. The findings of this study will be useful for researchers conducting Computer Science-related research: experts can gain insights into how to develop a strategic framework that will advance the field in a more impactful manner, and government agencies and policymakers can utilize the outcome of this research to develop strategies for improved funding of Computer Science research.
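The reported growth rate is presumably the compound annual growth rate of publication counts that biblioshiny computes for a collection; a sketch with hypothetical endpoint counts (the study's actual per-year counts are not given in the abstract):

```python
def annual_growth_rate(first_year_count, last_year_count, n_years):
    """Compound annual growth rate (%) of publication counts over
    an n_years window (n_years - 1 growth intervals)."""
    return ((last_year_count / first_year_count) ** (1 / (n_years - 1)) - 1) * 100

# Hypothetical endpoint counts over a 31-year window (1991-2021).
print(round(annual_growth_rate(2, 1350, 31), 2))
```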

Keywords: bibliometric analysis, biblioshiny, computer science, Nigeria, science mapping

Procedia PDF Downloads 112
41372 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility

Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha

Abstract:

Making research data openly accessible has been mandated by most funders over the last 5 years, as it promotes reproducibility in science and reduces duplication of the effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always apparent, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are reused over the years, focusing on biodiversity, since research data are frequently reused in this field. Metadata for 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source since it provides citation counts for datasets, not a commonly available feature for most repositories. Analysis of dataset types, citation counts, and the creation and update times of datasets suggests that citation rates vary for different types of datasets: occurrence datasets, which carry more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of the datasets from the earliest year, 2007, were updated after 11 years, and no dataset remained un-updated since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlation, 3- to 4-year-old datasets show a weak correlation, with datasets that were updated more recently receiving higher citations. The results suggest that it takes several years for research datasets to accumulate citations.
However, this investigation found that when the same datasets are searched on Google Scholar or Scopus, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it can be applied to other fields of study as well.
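The update-time versus citation-rate comparison described above amounts to a plain correlation between dataset recency and citation counts; a sketch with hypothetical numbers (the study's GBIF metadata are not reproduced):

```python
import numpy as np

# Hypothetical 3- to 4-year-old datasets:
# days since last update vs. accumulated citations.
days_since_update = np.array([30, 90, 180, 365, 540, 720, 900, 1100])
citations = np.array([14, 12, 9, 8, 6, 7, 4, 3])

r = np.corrcoef(days_since_update, citations)[0, 1]
print(r < 0)   # negative: recently updated datasets attract more citations
```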

Keywords: data citation, data reuse, research data sharing, webometrics

Procedia PDF Downloads 178
41371 A Critical Discourse Analysis of Jamaican and Trinidadian News Articles about D/Deafness

Authors: Melissa Angus Baboun

Abstract:

Utilizing a Critical Discourse Analysis (CDA) methodology and a theoretical framework based on disability studies, this study examined how Jamaican and Trinidadian newspapers discussed issues relating to the Deaf community. The term deaf was entered into the search engines of the online websites of the Jamaica Observer and the Trinidad & Tobago Guardian. All 27 articles that contained the term deaf and were written between August 1, 2017 and November 15, 2017 were chosen for the study. The data analysis was divided into three steps: (1) listing and analyzing instances of metaphorical deafness (e.g., 'fall on deaf ears'), (2) categorizing the content of the articles into models of disability discourse (the medical, socio-cultural, and superscrip models of disability narratives), and (3) analyzing any additional data found. A total of 42% of the articles pulled for this study did not deal with the Deaf community in any capacity, but rather contained idiomatic expressions that use deafness as a metaphor for a non-physical, undesirable trait; the most common such expression was 'fall on deaf ears'. Regarding the models of disability discourse, eight articles were found to follow the socio-cultural model, two the medical model, and two the superscrip model. The additional findings include two instances of the term 'deaf and mute', an overwhelming use of the lowercase 'd' in the term deaf, and the misuse of the term 'translator' to mean 'interpreter'.

Keywords: deafness, disability, news coverage, Caribbean newspapers

Procedia PDF Downloads 233
41370 Consumer Over-Indebtedness in Germany: An Investigation of Key Determinants

Authors: Xiaojing Wang, Ann-Marie Ward, Tony Wall

Abstract:

The problem of over-indebtedness has increased since deregulation of the banking industry in the 1980s, and now it has become a major problem for most countries in Europe, including Germany. Consumer debt issues have attracted not only the attention of academics but also government and debt counselling institutions. Overall, this research aims to contribute to the knowledge gap regarding the causes of consumer over-indebtedness in Germany and to develop predictive models for assessing consumer over-indebtedness risk at consumer level. The situation of consumer over-indebtedness is serious in Germany. The relatively high level of social welfare support in Germany suggests that consumer debt problems are caused by other factors, other than just over-spending and income volatility. Prior literature suggests that the overall stability of the economy and level of welfare support for individuals from the structural environment contributes to consumers’ debt problems. In terms of cultural influence, the conspicuous consumption theory in consumer behaviour suggests that consumers would spend more than their means to be seen as similar profiles to consumers in a higher socio-economic class. This results in consumers taking on more debt than they can afford, and eventually becoming over-indebted. Studies have also shown that financial literacy is negatively related to consumer over-indebtedness risk. Whilst prior literature has examined structural and cultural influences respectively, no study has taken a collective approach. To address this gap, a model is developed to investigate the association between consumer over-indebtedness and proxies for influences from the structural and cultural environment based on the above theories. The model also controls for consumer demographic characteristics identified as being of influence in prior literature, such as gender and age, and adverse shocks, such as divorce or bereavement in the household. 
Benefiting from SOEP regional data, this study is able to conduct quantitative empirical analysis testing both structural and cultural influences at a localised level. Using German Socio-Economic Panel (SOEP) data from 2006 to 2016, this study finds that social benefits, financial literacy, and conspicuous consumption all contribute to the risk of becoming over-indebted. Generally speaking, the risk of becoming over-indebted is high when consumers live in a low-welfare community, have little awareness of their own financial situation, and consistently over-spend. To tackle the problem of over-indebtedness, countermeasures can be taken, for example, increasing consumers' financial awareness and raising the level of welfare support. By analysing the causes of consumer over-indebtedness in Germany, this study also provides new insights into the nature and underlying causes of consumer debt issues in Europe.
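
The kind of consumer-level risk model described above can be sketched with a logistic regression; the sketch below uses synthetic data and hypothetical proxies (welfare level, financial-literacy score, conspicuous-consumption score), not the study's SOEP variables or coefficients.

```python
import numpy as np

# Minimal sketch (not the authors' model): logistic regression relating an
# over-indebtedness indicator to assumed proxies for structural and cultural
# influences. All data and effect sizes below are synthetic.
rng = np.random.default_rng(0)
n = 2000
welfare = rng.normal(0, 1, n)        # structural proxy (assumed)
literacy = rng.normal(0, 1, n)       # financial-literacy proxy (assumed)
conspicuous = rng.normal(0, 1, n)    # over-spending proxy (assumed)
# Synthetic relationship: low welfare/literacy and high conspicuous
# consumption raise the probability of being over-indebted.
logit = -1.0 - 0.8 * welfare - 0.6 * literacy + 0.9 * conspicuous
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), welfare, literacy, conspicuous])
beta = np.zeros(X.shape[1])
for _ in range(5000):                # plain gradient ascent on log-likelihood
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - p) / n

print(np.round(beta, 2))             # signs should match the synthetic effects
```

The estimated signs (negative for welfare and literacy, positive for conspicuous consumption) mirror the direction of the associations reported in the abstract.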

Keywords: consumer, debt, financial literacy, socio-economic

Procedia PDF Downloads 211
41369 A Regression Analysis Study of the Applicability of Side Scan Sonar-Based Safety Inspection of Underwater Structures

Authors: Chul Park, Youngseok Kim, Sangsik Choi

Abstract:

This study developed an electric jig for underwater structure inspection to address the difficulty of applying side scan sonar to underwater inspections, and analyzed correlations in empirical data to enhance sonar data resolution. With conventional tow-type equipment, it is difficult to inspect a cross-section at the time of inspection. With the electric jig, inspection time was shortened by more than 20% compared to conventional tow-type side scan sonar, and a proper cross-section could be inspected through accurate angle control. An indoor test conducted to enhance sonar data resolution showed that water depth, the distance from the underwater structure, and the imaging angle influence resolution and data quality. Based on data accumulated through field experience, multiple regression analysis was conducted on the correlations between these three variables. As a result, a relational equation for sonar operation as a function of water depth was derived.
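
The multiple regression step described above can be sketched as follows; the variable ranges and coefficients are made up for illustration and are not the study's empirical values or its relational equation.

```python
import numpy as np

# Hedged sketch: regress a sonar data-quality score on water depth, distance
# to the structure, and imaging angle. Synthetic data only.
rng = np.random.default_rng(1)
n = 200
depth = rng.uniform(2, 20, n)      # m (assumed range)
distance = rng.uniform(1, 10, n)   # m (assumed range)
angle = rng.uniform(10, 60, n)     # degrees (assumed range)
quality = 100 - 1.5 * depth - 3.0 * distance - 0.4 * angle + rng.normal(0, 2, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), depth, distance, angle])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((quality - pred) ** 2) / np.sum((quality - quality.mean()) ** 2)
print(np.round(coef, 2), round(r2, 3))
```

All three fitted slopes come out negative, matching the reported finding that greater depth, distance, and angle degrade resolution.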

Keywords: underwater structure, SONAR, safety inspection, resolution

Procedia PDF Downloads 265
41368 Public Libraries as Social Spaces for Vulnerable Populations

Authors: Natalie Malone

Abstract:

This study explores the role of a public library in the creation of social spaces for vulnerable populations. The data stem from a longitudinal ethnographic study of the Anderson Library community and include field notes, artifacts, and interviews. Thematic analysis revealed multiple meanings and thematic relationships within and among the data sources. Initial analysis suggests the Anderson Library serves as a space for vulnerable populations, with the sub-themes of fostering interpersonal communication to create a social space for children and fostering interpersonal communication to create a social space for parents and adults. These findings are important because they illustrate the potential of public libraries to serve as community-empowering institutions.

Keywords: capital, immigrant families, public libraries, space, vulnerable

Procedia PDF Downloads 151
41367 Study of Adding a Few Posterior Projections to 180° Acquisition for Myocardial SPECT

Authors: Yasuyuki Takahashi, Hirotaka Shimada, Takao Kanzaki

Abstract:

Dual-detector SPECT systems are widely used for myocardial SPECT studies. With 180-degree (180°) acquisition, reconstructed images are distorted in the posterior wall of the myocardium due to the lack of sufficient posterior projection data. We hypothesized that the quality of myocardial SPECT images can be improved by adding the acquisition of only a few posterior projections to the ordinary 180° acquisition. The proposed acquisition method ("180° plus" acquisition) uses a dual-detector SPECT system with the two detectors arranged perpendicular (90°) to each other. The sampling angle was 5°, and the acquisition range was 180°, from 45° right anterior oblique to 45° left posterior oblique. After the 180° acquisition, the detector moved to additional acquisition positions on the reverse side: once for 2 projections, twice for 4 projections, or three times for 6 projections. Since these acquisition methods cannot be performed on the present system, actual data acquisition was carried out over 360° with a sampling angle of 5°, and the projection data corresponding to the above acquisition positions were extracted for reconstruction. We performed phantom studies and a clinical study. SPECT images were compared by profile curve analysis and quantitatively by contrast ratio. The distortion was improved by the 180° plus method, and profile curve analysis showed an improvement in the depiction of the cardiac cavity. Contrast ratio analysis revealed that the SPECT images of the phantoms and the clinical study were improved over 180° acquisition by the proposed methods. No clear difference in contrast was recognized among 180° plus 2 projections, 180° plus 4 projections, and 180° plus 6 projections. The 180° plus 2 projections method may therefore be feasible for myocardial SPECT, since both image distortion and contrast were improved.
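
The contrast-ratio comparison described above can be illustrated with a toy calculation; the definition below is one common formulation, and the count values are invented, not the study's profile-curve data.

```python
import numpy as np

def contrast_ratio(wall_counts, cavity_counts):
    """One common definition: (wall - cavity) / (wall + cavity)."""
    w, c = np.mean(wall_counts), np.mean(cavity_counts)
    return (w - c) / (w + c)

# Synthetic profile-curve samples (not study data): adding posterior
# projections should raise wall counts relative to the cavity.
base = contrast_ratio([80, 82, 79], [40, 42, 41])     # plain 180° acquisition
plus2 = contrast_ratio([90, 91, 89], [38, 40, 39])    # 180° plus 2 projections
print(round(base, 3), round(plus2, 3))
```

A higher ratio for the "plus" acquisition corresponds to the improved contrast the abstract reports.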

Keywords: 180° plus acquisition method, a few posterior projections, dual-detector SPECT system, myocardial SPECT

Procedia PDF Downloads 295
41366 Fault Tree Analysis (FTA) of CNC Turning Center

Authors: R. B. Patil, B. S. Kothavale, L. Y. Waghmode

Abstract:

Today, the CNC turning center has become an important machine tool for the manufacturing industry worldwide. However, the breakdown of a single CNC turning center may halt the production of an entire plant. For this reason, unplanned downtime and preventive maintenance have to be minimized to ensure availability of the system. Indeed, improving the availability of the CNC turning center as a whole leads to a substantial reduction in production loss and in operating, maintenance, and support costs. In this paper, the fault tree analysis (FTA) method is used for reliability analysis of a CNC turning center. The major faults associated with the system and their causes are presented graphically. Boolean algebra is used for evaluating the fault tree (FT) diagram and for deriving a governing reliability model for the CNC turning center. Failure data collected over a period of six years has been used for evaluating the model. Qualitative and quantitative analyses are also carried out to identify critical sub-systems and components of the CNC turning center. It is found that, at the end of the warranty period (one year), the reliability of the CNC turning center as a whole is around 0.61628.
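
The quantitative FTA step (Boolean gate evaluation) can be sketched as follows; the sub-system names and failure probabilities are hypothetical, not the paper's six-year failure data.

```python
from functools import reduce

# Minimal sketch of quantitative fault tree evaluation. For an OR gate
# (failure of any sub-system halts the machine), the top-event probability
# is P(top) = 1 - prod(1 - p_i); system reliability is its complement.
def or_gate(probs):
    return 1 - reduce(lambda acc, p: acc * (1 - p), probs, 1.0)

def and_gate(probs):  # all inputs must fail for the gate output to occur
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Illustrative one-year failure probabilities of hypothetical sub-systems
subsystems = {"spindle": 0.10, "turret": 0.08, "axis drives": 0.12, "coolant": 0.15}
p_top = or_gate(subsystems.values())
reliability = 1 - p_top
print(round(reliability, 4))  # one-year system reliability under these inputs
```

With basic events combined through OR gates like this, the system reliability is simply the product of the sub-system reliabilities; critical components are those whose improvement raises that product the most.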

Keywords: fault tree analysis (FTA), reliability analysis, risk assessment, hazard analysis

Procedia PDF Downloads 414
41365 Using Artificial Intelligence Method to Explore the Important Factors in the Reuse of Telecare by the Elderly

Authors: Jui-Chen Huang

Abstract:

This research used an artificial intelligence method to explore the elderly's intention to reuse telecare and the effects of service quality, satisfaction, and customer perceived value on that intention. A questionnaire survey of the elderly yielded 124 valid responses. A backpropagation network (BPN) was adopted as an effective and feasible analysis method that differs from traditional approaches. Two-thirds of the sample (82 cases) were used as training data, and one-third (42 cases) as testing data. The training and testing RMSE (root mean square error) were 0.022 and 0.009 for the BPN, respectively; as shown, these errors are acceptable. By contrast, the training and testing RMSE were 0.100 and 0.099 for the regression model. In addition, the results showed that service quality has the greatest effect on the intention to reuse, followed by satisfaction and perceived value. The BPN method thus outperformed the regression analysis. These results can serve as a reference for future research.
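
The BPN-versus-regression comparison can be sketched with a small backpropagation network on synthetic data; the sample size and 2/3-1/3 split mirror the abstract, but the predictors, target function, and resulting RMSE values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 124                                # same sample size as the survey
X = rng.uniform(-1, 1, (n, 3))         # stand-ins for quality, satisfaction, value
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, n)
Xtr, Xte, ytr, yte = X[:82], X[82:], y[:82], y[82:]   # 2/3 train, 1/3 test

# One-hidden-layer backpropagation network trained by full-batch gradient descent
h = 8
W1 = rng.normal(0, 0.5, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, h); b2 = 0.0
for _ in range(4000):
    z = np.tanh(Xtr @ W1 + b1)                         # forward pass
    err = z @ W2 + b2 - ytr
    gW2 = z.T @ err / len(ytr); gb2 = err.mean()       # backpropagate error
    dz = np.outer(err, W2) * (1 - z ** 2)
    gW1 = Xtr.T @ dz / len(ytr); gb1 = dz.mean(axis=0)
    W1 -= 0.05 * gW1; b1 -= 0.05 * gb1; W2 -= 0.05 * gW2; b2 -= 0.05 * gb2

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

bpn_test = rmse(np.tanh(Xte @ W1 + b1) @ W2 + b2, yte)

# Linear regression baseline on the same split
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(82), Xtr]), ytr, rcond=None)
lin_test = rmse(np.column_stack([np.ones(n - 82), Xte]) @ coef, yte)
print(round(bpn_test, 3), round(lin_test, 3))
```

Because the synthetic target contains a nonlinear term, the BPN's test RMSE falls below the linear regression's, reproducing the qualitative pattern reported in the abstract.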

Keywords: artificial intelligence, backpropagation network (BPN), elderly, reuse, telecare

Procedia PDF Downloads 212
41364 Capturing Public Voices: The Role of Social Media in Heritage Management

Authors: Mahda Foroughi, Bruno de Anderade, Ana Pereira Roders

Abstract:

Social media platforms are increasingly used by locals and tourists to express their opinions about buildings, cities, and built heritage in particular. Most recently, scholars have used social media to conduct innovative research on built heritage and heritage management. Still, the application of artificial intelligence (AI) methods to analyze social media data for heritage management is seldom explored. This paper investigates the potential of short texts (sentences and hashtags) shared through social media as a data source, and of artificial intelligence methods for data analysis, for revealing the cultural significance (values and attributes) of built heritage. The city of Yazd, Iran, was taken as a case study, with a particular focus on windcatchers, key attributes conveying outstanding universal value, as inscribed on the UNESCO World Heritage List. The paper has three phases: 1) a state of the art on the intersection of public participation in heritage management and social media research; 2) a methodology for collecting and analyzing data, coding people's voices from Instagram and Twitter into values of windcatchers over the last ten years; 3) preliminary findings comparing the opinions of locals and tourists, sentiment analysis, and its association with the values and attributes of windcatchers. Results indicate that the age value is recognized as the most important value by all interest groups, while the political value is the least acknowledged. Besides, negative sentiments (e.g., critiques) are scarcely reflected in social media. The results confirm the potential of social media for heritage management in terms of (de)coding and measuring the cultural significance of built heritage for windcatchers in Yazd. The methodology developed in this paper can be applied to other attributes in Yazd and to other case studies.
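
The coding-and-sentiment step can be illustrated with a toy rule-based pipeline; the value categories, keyword lists, and lexicon below are invented stand-ins, not the paper's coding scheme or its AI models.

```python
# Toy illustration (not the study's pipeline): map short social-media texts
# to heritage value categories via keywords, with a tiny sentiment lexicon.
VALUE_KEYWORDS = {
    "age": ["ancient", "historic", "centuries", "old"],
    "ecological": ["cooling", "wind", "passive", "sustainable"],
    "aesthetic": ["beautiful", "stunning", "elegant"],
}
POSITIVE = {"beautiful", "stunning", "love", "amazing", "elegant"}
NEGATIVE = {"ruined", "crowded", "ugly", "neglected"}

def code_post(text):
    # treat hashtags as ordinary words, then match against the keyword lists
    words = set(text.lower().replace("#", " ").split())
    values = [v for v, kws in VALUE_KEYWORDS.items() if words & set(kws)]
    sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
    return values, sentiment

values, sent = code_post("Stunning ancient windcatchers of Yazd #historic #cooling")
print(values, sent)
```

In practice this lexicon step would be replaced by trained classifiers, but it shows how a single post can be coded into several values at once and scored for sentiment.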

Keywords: social media, artificial intelligence, public participation, cultural significance, heritage, sentiment analysis

Procedia PDF Downloads 113
41363 Performance Evaluation and Planning for Road Safety Measures Using Data Envelopment Analysis and Fuzzy Decision Making

Authors: Hamid Reza Behnood, Esmaeel Ayati, Tom Brijs, Mohammadali Pirayesh Neghab

Abstract:

Investment projects in road safety planning can benefit from an effectiveness evaluation of their expected safety outcomes. The objective of this study is to develop a decision support system (DSS) that helps policymakers make the right choice in road safety planning, based on the efficiency of previously implemented safety measures in a set of regions in Iran. The measures considered for each region include performance indicators on (1) police operations, (2) treated black spots, (3) freeway and highway facility supplies, (4) speed control cameras, (5) emergency medical services, and (6) road lighting projects. To this end, an inefficiency measure is calculated, defined as the ratio of the fatality rate to a combined measure of road safety performance indicators (i.e., road safety measures), which should be minimized. The relative inefficiency of each region is modeled with the Data Envelopment Analysis (DEA) technique. In the next step, a fuzzy decision-making system is constructed to convert the information obtained from the DEA analysis into a rule-based system that policymakers can use to evaluate the expected outcomes of alternative investment strategies in road safety.
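
The inefficiency idea can be sketched numerically; note that the paper solves a full DEA linear program per region, whereas the sketch below uses a fixed equal-weight combination of indicators, and all figures are made up.

```python
# Simplified illustration of the inefficiency measure (fatality rate relative
# to a combined safety-performance measure). A real DEA model would choose
# region-specific optimal weights via linear programming.
regions = {
    "R1": {"fatality_rate": 12.0, "indicators": [0.7, 0.5, 0.6]},
    "R2": {"fatality_rate": 8.0,  "indicators": [0.9, 0.8, 0.7]},
    "R3": {"fatality_rate": 15.0, "indicators": [0.4, 0.3, 0.5]},
}

def inefficiency(fatality_rate, indicators):
    # fatalities per unit of combined safety effort: lower is better
    return fatality_rate / sum(indicators)

scores = {r: inefficiency(d["fatality_rate"], d["indicators"])
          for r, d in regions.items()}
ranked = sorted(scores, key=scores.get)   # most efficient region first
print(ranked)
```

The ranking produced this way is the kind of input a downstream fuzzy rule base could translate into investment recommendations.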

Keywords: performance indicators, road safety, decision support system, data envelopment analysis, fuzzy reasoning

Procedia PDF Downloads 353
41362 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the cost of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers that combines sensor data (temperature, humidity, power) with deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained on the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples, using feature extraction and a random forest classifier. Data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score, by comparing detected anomalies with the data center's history. With an F1-score of 83.60% versus 24.16%, the proposed model outperforms the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error.
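
The reconstruction-error principle behind the method can be sketched with a much simpler model: a linear autoencoder (rather than the paper's per-sensor LSTM autoencoders) trained on synthetic normal readings, with a quantile threshold in place of the random forest stage. Everything below is illustrative.

```python
import numpy as np

# Hedged sketch of reconstruction-based anomaly detection on synthetic
# (temperature, humidity, power) readings where power normally tracks
# temperature through a shared load factor.
rng = np.random.default_rng(3)
t = rng.normal(0, 1, 500)                         # shared load factor
h = rng.normal(0, 1, 500)
normal = np.column_stack([
    22 + 0.5 * t,                                 # temperature (°C)
    45 + 2.0 * h,                                 # humidity (%RH), independent
    5 + 0.2 * t + rng.normal(0, 0.02, 500),       # power (kW) tracks temperature
])
mu, sd = normal.mean(0), normal.std(0)
Xn = (normal - mu) / sd

# Linear autoencoder with tied weights (3 -> 2 -> 3), trained by gradient
# descent on the reconstruction MSE over normal samples only.
W = rng.normal(0, 0.1, (3, 2))
for _ in range(2000):
    E = Xn @ W @ W.T - Xn                         # reconstruction residual
    G = 2 * (Xn.T @ (E @ W) + (E.T @ Xn) @ W) / len(Xn)
    W -= 0.05 * G

def recon_error(x):
    xs = (x - mu) / sd
    return float(np.sum((xs @ W @ W.T - xs) ** 2))

threshold = np.quantile([recon_error(x) for x in normal], 0.99)
anomaly = np.array([22.0, 45.0, 9.0])             # power spike without heat rise
print(recon_error(anomaly) > threshold)
```

The anomaly breaks the learned temperature-power correlation, so its reconstruction error far exceeds the threshold fitted on normal data; the paper's LSTM autoencoders apply the same logic to temporal sequences.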

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 194
41361 Design and Development of a Computerized Medical Record System for Hospitals in Remote Areas

Authors: Grace Omowunmi Soyebi

Abstract:

A computerized medical record system is a collection of medical information about a person stored on a computer. A principal problem of most hospitals in rural areas is that records are kept with a paper file management system. A lot of time is wasted when a patient visits the hospital, possibly in an emergency, because the nurse or attendant has to search through voluminous files before the patient's file can be retrieved; this delay may harm the patient. The application is designed using the Structured System Analysis and Design Method, which supports a well-articulated analysis of the existing file management system, a feasibility study, and proper documentation of the design and implementation of the computerized medical record system. The computerized system will replace the file management system, allow a patient's record to be retrieved quickly with increased data security, provide access to clinical records for decision-making, and reduce the time it takes for a patient to be attended to.
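
The core retrieval improvement (indexed lookup instead of searching through paper files) can be sketched as follows; the functions and identifiers are hypothetical, and a real system would add persistent storage and access control.

```python
# Minimal sketch: records keyed by patient ID give constant-time retrieval,
# replacing a linear search through physical files. Illustrative only.
records = {}

def admit(patient_id, name, history=None):
    records[patient_id] = {"name": name, "history": history or []}

def add_visit(patient_id, note):
    records[patient_id]["history"].append(note)

def retrieve(patient_id):
    return records.get(patient_id)   # O(1) dictionary lookup, no file search

admit("P-0001", "A. Bello")
add_visit("P-0001", "2024-05-01: fever; malaria test ordered")
print(retrieve("P-0001")["name"])
```

Even this toy version makes the time saving concrete: retrieval cost no longer grows with the number of patients on file.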

Keywords: programming, computing, data, innovation

Procedia PDF Downloads 119
41360 Forecasting Regional Data Using Spatial VARs

Authors: Taisiia Gorshkova

Abstract:

Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. A popular approach to studying regional indicators is to model them while taking into account spatial relationships between objects belonging to the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions (SpVARs), was developed. The main difference between standard and spatial vector autoregressions is that in an SpVAR, the values of indicators at time t may depend on the values of explanatory variables in neighboring regions at the same time t and on the values of explanatory variables in neighboring regions at time t-k. Thus, the standard VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the SpVAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and the regional CPI are used as endogenous variables. Lags of GRP, the CPI, and the unemployment rate were used as explanatory variables. For comparison, a standard VAR without spatial correlation was used as a "naïve" model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1, and the rest 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time, t-1.
According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their own previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation model are practically unchanged: all indicators except the unemployment lag are significant at the 5% level. For GRP, in turn, the GRP lags of neighboring regions also become significant at the 5% level. The RMSE was calculated for both the spatial and the "naïve" VARs; the minimum RMSE is obtained with the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly the CPI and GRP) and the general situation in the regions.
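
The second SpVAR specification (spatially lagged dependent variables at time t-1) can be sketched for a single indicator; the three-region adjacency structure, the coefficients, and the simulated data below are illustrative, not the Russian regional dataset.

```python
import numpy as np

# Illustrative SpVAR-style equation for one indicator (e.g., log GRP):
#   y_t = a + phi * y_{t-1} + rho * W y_{t-1} + e_t,
# where W is a row-normalised adjacency matrix (1 for bordering regions).
rng = np.random.default_rng(4)
A = np.array([[0, 1, 0],           # region 1 borders region 2, etc.
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)     # row-normalise the weights

phi_true, rho_true, T = 0.5, 0.3, 300
y = np.zeros((T, 3))
for t in range(1, T):
    y[t] = 0.1 + phi_true * y[t - 1] + rho_true * W @ y[t - 1] \
           + rng.normal(0, 0.1, 3)

# Pool all regions and estimate (a, phi, rho) by OLS
ylag = y[:-1].ravel()
wylag = (y[:-1] @ W.T).ravel()            # spatially lagged dependent variable
Y = y[1:].ravel()
X = np.column_stack([np.ones(Y.size), ylag, wylag])
a_hat, phi_hat, rho_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
print(round(phi_hat, 2), round(rho_hat, 2))
```

Setting rho to zero recovers the "naïve" VAR; comparing the two models' out-of-sample RMSE is the comparison the study performs across Russian regions.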

Keywords: forecasting, regional data, spatial econometrics, vector autoregression

Procedia PDF Downloads 141
41359 Using HABIT to Establish the Chemicals Analysis Methodology for Maanshan Nuclear Power Plant

Authors: J. R. Wang, S. W. Chen, Y. Chiang, W. S. Hsu, J. H. Yang, Y. S. Tseng, C. Shih

Abstract:

In this research, the HABIT analysis methodology was established for the Maanshan nuclear power plant (NPP). The Final Safety Analysis Report (FSAR), reports, and other data were used in this study. The HABIT methodology was used to evaluate control room habitability under a CO2 storage burst. The HABIT result was below the R.G. 1.78 failure criterion, indicating that Maanshan NPP habitability can be maintained. Additionally, a sensitivity study of the parameters (wind speed, atmospheric stability class, air temperature, and control room intake flow rate) was also performed.
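
Why wind speed and atmospheric stability dominate such a sensitivity study can be illustrated with a generic ground-level Gaussian-plume centerline concentration; the release rate, distance, and Briggs class-F dispersion fits below are indicative stand-ins, not HABIT's actual models or the Maanshan inputs.

```python
import math

# Hedged illustration: ground-level Gaussian-plume centerline concentration,
#   C = Q / (pi * u * sigma_y * sigma_z),
# with Briggs open-country dispersion fits for stability class F (assumed).
def sigma_f(x):
    sigma_y = 0.04 * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = 0.016 * x / (1 + 0.0003 * x)
    return sigma_y, sigma_z

def centerline_conc(Q, u, x):
    sy, sz = sigma_f(x)
    return Q / (math.pi * u * sy * sz)

Q = 100.0   # release rate, g/s (made up)
x = 200.0   # distance from release point to control room intake, m (made up)
ratio = centerline_conc(Q, 1.0, x) / centerline_conc(Q, 5.0, x)
print(ratio)   # concentration scales as 1/u, so calm winds are limiting
```

Because concentration scales inversely with wind speed and the sigmas shrink for stable classes, the calm, stable cases probed in the sensitivity study are the limiting ones for habitability.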

Keywords: PWR, HABIT, habitability, Maanshan

Procedia PDF Downloads 445