Search results for: ERA-5 analysis data
41794 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation
Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves
Abstract:
Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. Within veterinary disease surveillance, however, such techniques are still in the early stages of development and have not yet been fully utilised. This study explores the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland, using online text extraction and subsequent mining of this data. Web scraping of livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found amongst both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it has produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but human validation of the data remains crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP
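As an illustration of the bi-gram and topic-modelling step described above, a minimal sketch is given below; the library choice (scikit-learn) and the toy forum posts are assumptions, not the study's own pipeline.

```python
# Sketch: bi-gram extraction and LDA topic modelling on forum text.
# The posts are invented stand-ins for scraped smallholder forum data.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "feeding the hens layer pellets before winter",
    "advice on breeding stock and weaning piglets",
    "slaughter dates and carcass disposal rules for smallholders",
] * 10

vec = CountVectorizer(ngram_range=(2, 2), stop_words="english")  # bi-grams only
counts = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(counts)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:]]  # top bi-grams per topic
    print(f"topic {k}: {top}")
```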
Procedia PDF Downloads 101
41793 SWOT Analysis for Employment of Graduates of Physical Education and Sport Sciences in Iran
Authors: Mohammad Reza Boroumand Devlagh
Abstract:
The employment problem, especially among university graduates, is among the most important challenges of the decade ahead. The purpose of this study is a SWOT analysis for the employment of graduates of Physical Education and Sport Sciences in Iran. The sample of this research consists of 115 participants (35.5 ± 8.0 years): physical education and sport sciences faculty members of higher education institutions, major sport managers, and graduates of physical education and sport sciences. The library method, interviews, and questionnaires were used to collect data. The questionnaire was made in four parts, Strengths, Weaknesses, Opportunities, and Threats, with a Cronbach's alpha coefficient of 0.94. After data collection, means, standard deviations (SD), and percentages were calculated using SPSS software. The Friedman test was used for the statistical analysis at P < 0.05. The results showed that the employment of graduates of Physical Education and Sport Sciences in Iran is located in the worst possible position (T-W area) of the Strategic Position and Action Evaluation Matrix (SPACEM): there are more weaknesses than strengths (2.02 < 2.5) in the internal evaluation and more threats than opportunities (2.36 < 2.5) in the external evaluation.
Keywords: employment, graduate, physical education and sport sciences, SWOT analysis
Procedia PDF Downloads 540
41792 Fuzzy Total Factor Productivity by Credibility Theory
Authors: Shivi Agarwal, Trilok Mathur
Abstract:
This paper proposes a method to measure total factor productivity (TFP) change by credibility theory for fuzzy input and output variables. Total factor productivity change has been widely studied with crisp input and output variables; however, in some cases, the input and output data of decision-making units (DMUs) can only be measured with uncertainty. Such data can be represented as linguistic variables characterized by fuzzy numbers. The Malmquist productivity index (MPI) is widely used to estimate TFP change by calculating the total factor productivity of a DMU for different time periods using data envelopment analysis (DEA). The fuzzy DEA (FDEA) model is solved using credibility theory, and its results are used to measure the TFP change for fuzzy input and output variables. Finally, numerical examples are presented to illustrate the proposed method. The suggested methodology can be utilized for performance evaluation of DMUs and can help to assess their level of integration. The methodology can also be applied to rank the DMUs, identify the DMUs that are lagging behind, and make recommendations as to how they can improve their performance to bring them to par with the other DMUs.
Keywords: chance-constrained programming, credibility theory, data envelopment analysis, fuzzy data, Malmquist productivity index
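For reference, the crisp Malmquist productivity index that the fuzzy extension builds on is conventionally defined from DEA distance functions; a standard (non-fuzzy) formulation is:

\[
M_o\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right)=\left[\frac{D_o^{t}\left(x^{t+1}, y^{t+1}\right)}{D_o^{t}\left(x^{t}, y^{t}\right)} \cdot \frac{D_o^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D_o^{t+1}\left(x^{t}, y^{t}\right)}\right]^{1/2}
\]

where \(D_o^{t}\) is the distance function of DMU \(o\) evaluated against the period-\(t\) frontier, and \(M_o > 1\) indicates TFP growth between periods \(t\) and \(t+1\).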
Procedia PDF Downloads 367
41791 A Security Cloud Storage Scheme Based on Accountable Key-Policy Attribute-Based Encryption without Key Escrow
Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun
Abstract:
With the development of cloud computing, more and more users are starting to utilize cloud storage services. However, several issues exist: 1) the cloud server steals the shared data, 2) sharers collude with the cloud server to steal the shared data, 3) the cloud server tampers with the shared data, 4) sharers and the key generation center (KGC) conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. The data are encrypted to protect privacy, and hash algorithms are used to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks.
Keywords: cloud storage security, sharing storage, attributes, hash algorithm
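As a minimal illustration of two of the building blocks named above (AES for confidentiality, a hash for tamper detection), and not of the WOKE-AKP-ABE construction itself, a Python sketch could look like this; the choice of the `cryptography` package and all function names are assumptions.

```python
# Sketch: AES-GCM encryption plus a SHA-256 digest for integrity checking.
# This illustrates only the AES + hashing ingredients, not the paper's
# attribute-based encryption scheme.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(data: bytes, key: bytes):
    nonce = os.urandom(12)                           # unique per message
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    digest = hashlib.sha256(ciphertext).hexdigest()  # stored to detect tampering
    return nonce, ciphertext, digest

def verify_and_decrypt(nonce, ciphertext, digest, key: bytes) -> bytes:
    if hashlib.sha256(ciphertext).hexdigest() != digest:
        raise ValueError("ciphertext was tampered with on the cloud server")
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)
nonce, ct, d = encrypt_for_upload(b"shared data", key)
assert verify_and_decrypt(nonce, ct, d, key) == b"shared data"
```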
Procedia PDF Downloads 390
41790 Prevalence of Listeria and Salmonella Contamination in FDA Recalled Foods
Authors: Oluwatofunmi Musa-Ajakaiye, Paul Olorunfemi (M.D., M.P.H.), John Obafaiye
Abstract:
Introduction: The U.S. Food and Drug Administration (FDA) publishes public notices for recalled FDA-regulated products. This study reviewed the primary reasons for recalls of products of various types over a period of 7 years. Methods: The study analyzed data provided in the FDA's archived recalls for the years 2010-2017. It identified the various reasons for product recalls in the categories of foods, beverages, drugs, medical devices, animal and veterinary products, and dietary supplements. Using SPSS version 29, descriptive statistics and chi-square analysis of the data were performed. Results: Over the period of analysis, a total of 931 recalls were reported. The most frequent reason for recalls was undeclared products (36.7%). The analysis showed that the most recalled product type in the data set was foods and beverages, representing 591 of all recalled products (63.5%). In addition, foods and beverages represented 77.2% of products recalled due to the presence of microorganisms. A sub-group analysis of recall reasons for foods and beverages found that the most prevalent reason for such recalls was undeclared products (50.1%), followed by Listeria (17.3%) and then Salmonella (13.2%). Conclusion: This analysis shows that foods and beverages account for the greatest percentage of total recalls, driven by undeclared products, Listeria contamination, and Salmonella contamination. The prevalence of Salmonella and Listeria contamination suggests that there is a high risk of microbial contamination in FDA-approved products, and further studies on the effects of such contamination must be conducted to ensure consumer safety.
Keywords: food, beverages, Listeria, Salmonella, FDA, contamination, microbial
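The chi-square analysis mentioned in the Methods section can be reproduced outside SPSS; a sketch with scipy is shown below, where the contingency counts are made-up placeholders, not the study's data.

```python
# Sketch: chi-square test of recall reason vs. product category,
# mirroring the SPSS analysis described above.
from scipy.stats import chi2_contingency

#                 undeclared  Listeria  Salmonella  other
observed = [[296, 102, 78, 115],   # foods and beverages (placeholder counts)
            [ 46,   0,  5, 289]]   # all other product types (placeholder counts)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```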
Procedia PDF Downloads 65
41789 Towards a Distributed Computation Platform Tailored for Educational Process Discovery and Analysis
Authors: Awatef Hicheur Cairns, Billel Gueni, Hind Hafdi, Christian Joubert, Nasser Khelifa
Abstract:
Given the ever-changing needs of the job markets, education and training centers are increasingly held accountable for student success. Therefore, education and training centers have to focus on ways to streamline their offers and educational processes in order to achieve the highest level of quality in curriculum contents and managerial decisions. Educational process mining is an emerging field in the educational data mining (EDM) discipline, concerned with developing methods to discover, analyze, and provide a visual representation of complete educational processes. In this paper, we present our distributed computation platform, which allows different education centers and institutions to load their data and access advanced data mining and process mining services. To this end, we also present a comparative study of the different clustering techniques developed in the context of process mining to efficiently partition educational traces. Our goal is to find the best strategy for distributing heavy analysis computations on the many processing nodes of our platform.
Keywords: educational process mining, distributed process mining, clustering, distributed platform, educational data mining, ProM
Procedia PDF Downloads 454
41788 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries
Authors: Gaurav Kumar Sinha
Abstract:
In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency
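One concrete pattern behind the predictive models and real-time monitoring described above is flagging anomalous consumption readings. A minimal, cloud-agnostic sketch follows; the window, threshold, and toy meter data are assumptions, and a real deployment would feed such logic from AWS-hosted data pipelines rather than an in-memory series.

```python
# Sketch: flag anomalous energy readings with a rolling z-score.
# Plain pandas stands in here for a streamed metering pipeline.
import pandas as pd

def flag_anomalies(kwh: pd.Series, window: int = 96, z: float = 3.0) -> pd.Series:
    mu = kwh.rolling(window).mean()
    sigma = kwh.rolling(window).std()
    return (kwh - mu).abs() > z * sigma   # True where consumption is anomalous

# toy 15-minute meter data with one injected spike per cycle
readings = pd.Series([12.1, 11.8, 12.3, 55.0, 12.0] * 40)
print(flag_anomalies(readings).sum(), "anomalous intervals flagged")
```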
Procedia PDF Downloads 64
41787 On the Principle of Sustainable Development and International Law
Authors: Zhang Rui
Abstract:
Context: The paper addresses the necessity of incorporating the principle of sustainable development into international law to guide states and international organizations towards achieving this goal. Research aim: To emphasize the importance of integrating sustainable development into international law and establishing procedures to attain this objective. Methodology: The study utilizes document analysis, comparative law analysis, and international law analysis to support the argument for including sustainable development in international legal frameworks. Findings: The findings suggest that integrating sustainable development into international law can lead to significant improvements in legal practices, treaty interpretations, and state behaviors. Theoretical importance: The paper highlights the potential impacts of the principle of sustainable development on reshaping existing legal norms and promoting sustainable practices globally. Data collection: The data is gathered through the analysis of relevant legal documents, comparative studies, and international legal frameworks. Analysis procedures: The analysis involves examining how the principle of sustainable development can influence legal outcomes, treaty interpretations, and state behaviors. Questions addressed: The study addresses how the principle of sustainable development can be integrated into international law and what implications this integration can have on legal practices and state behaviors. Conclusion: Integrating sustainable development into international law is crucial for advancing global sustainability objectives and guiding states and international organizations towards sustainable practices.
Keywords: international law, sustainable development, environmental legislation, sovereign equality
Procedia PDF Downloads 21
41786 A Nexus between Financial Development and Its Determinants: A Panel Data Analysis from a Global Perspective
Authors: Bilal Ashraf, Qianxiao Zhang
Abstract:
This study empirically investigated the linkage between financial development and its important determinants, namely information and communication technology (ICT), natural resource rents (NRR), economic growth (EG), current account balance (CAB), and gross savings (S), in 107 economies. The paper employs second-generation unit root tests to handle the issues of slope heterogeneity and cross-sectional dependence in panel data. The Kao, Pedroni, and Westerlund tests confirm long-run relationships among the variables under study, while the significant findings of the cross-sectionally augmented autoregressive distributed lag (CS-ARDL) estimation reveal that NRR, CAB, and S negatively affect financial development, whereas ICT and EG stimulate the process of FD. Further, the application of FGLS in the robustness analysis supports the appropriateness and applicability of CS-ARDL. Finally, the findings of the DH causality analysis endorse bidirectional causality linkages among the research factors. Based on the study's outcomes, we offer some policy suggestions for strengthening the process of financial development globally.
Keywords: determinants of financial development, CS-ARDL, financial development, global sample, causality analysis
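For orientation, the CS-ARDL estimator augments a panel ARDL regression with lagged cross-sectional averages to absorb common factors; a generic specification in the spirit of Chudik and Pesaran (the exact lag orders used in this study are not reported here) is:

\[
y_{it} = \sum_{l=1}^{p} \varphi_{il}\, y_{i,t-l} + \sum_{l=0}^{q} \beta_{il}' x_{i,t-l} + \sum_{l=0}^{s} \delta_{il}' \bar{z}_{t-l} + e_{it}, \qquad \bar{z}_{t} = \left(\bar{y}_{t}, \bar{x}_{t}'\right)'
\]

where \(\bar{y}_t\) and \(\bar{x}_t\) are cross-sectional averages of the dependent variable and regressors.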
Procedia PDF Downloads 61
41785 Control the Flow of Big Data
Authors: Shizra Waris, Saleem Akhtar
Abstract:
Big data is a research area receiving attention from academia and the IT communities. In the digital world, the amounts of data produced and stored have grown enormously within a short period of time. Consequently, this fast-increasing rate of data has created many challenges. In this paper, we use the functionalism and structuralism paradigms to analyze the genesis of big data applications and their current trends. The paper presents a complete discussion of state-of-the-art big data technologies based on batch and stream data processing, and the strengths and weaknesses of these technologies are analyzed. The study also covers big data analytics techniques, processing methods, some reported case studies from different vendors, several open research challenges, and the opportunities brought about by big data. The similarities and differences of these techniques and technologies, based on their important limitations, are also investigated. Emerging technologies are suggested as a solution for big data problems.
Keywords: computer, IT community, industry, big data
Procedia PDF Downloads 194
41784 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis
Authors: Amir Hajian, Sepehr Damavandinejadmonfared
Abstract:
In this paper, the issue of dimensionality reduction is investigated in finger vein recognition systems using kernel principal component analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as several kernel functions can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector. This is of importance especially when it comes to real-world applications of such algorithms, where a fixed feature vector dimension has to be set to reduce the dimension of the input and output data and extract the features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA (with polynomial, Gaussian, and Laplacian kernels) in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
Keywords: biometrics, finger vein recognition, principal component analysis (PCA), kernel principal component analysis (KPCA)
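A sketch of the dimension sweep described above, using scikit-learn's KernelPCA with a simple nearest-neighbour classifier on synthetic data, is shown below; the kernels, dimensions, and toy features are illustrative assumptions (a Laplacian kernel can additionally be supplied as a callable).

```python
# Sketch: grid over kernel functions and feature dimensions for kernel PCA,
# scoring each setting with a k-NN classifier. Toy data stand in for
# finger vein features.
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=300, n_features=64, n_informative=20,
                           n_classes=4, random_state=0)

for kernel in ["poly", "rbf"]:
    for dim in [5, 10, 20, 40]:
        model = make_pipeline(KernelPCA(n_components=dim, kernel=kernel),
                              KNeighborsClassifier(n_neighbors=3))
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"kernel={kernel:4s} dim={dim:3d} accuracy={score:.3f}")
```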
Procedia PDF Downloads 365
41783 An Overview of Domain Models of Urban Quantitative Analysis
Authors: Mohan Li
Abstract:
Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analysis work cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning; throughout the work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data, and how to manage and process it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the technologies and technological iterations that may affect urban research in the future, helping researchers discover urban problems and implement targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamic domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.
Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design
Procedia PDF Downloads 177
41782 Machine Learning-Based Workflow for the Analysis of Project Portfolio
Authors: Jean Marie Tshimula, Atsushi Togashi
Abstract:
We develop a data-science approach providing interactive visualization and predictive models to find insights into projects' historical data, in order for stakeholders to understand unseen opportunities in the African market that might otherwise escape them behind the online project portfolio of the African Development Bank. This machine learning-based web application identifies the market trends of the fastest-growing economies across the continent, as well as skyrocketing sectors which have a significant impact on the future of business in Africa. Owing to this, the approach is tailored to predict where investment is most needed. We create a corpus that includes the descriptions of more than 1,200 projects covering approximately 14 sectors across 53 African countries, and then sift through this large amount of semi-structured data to extract details that may suggest directions to follow. In light of the foregoing, we apply a combination of Latent Dirichlet Allocation and Random Forests in the analysis module of our methodology to highlight the most relevant topics that investors may focus on when investing in Africa.
Keywords: machine learning, topic modeling, natural language processing, big data
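A compact sketch of the LDA-plus-Random-Forest analysis module might look as follows; the toy project descriptions and priority labels are invented placeholders standing in for the 1,200-project corpus described above.

```python
# Sketch: topic modelling with LDA, then a random forest trained on the
# per-document topic mixtures.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

descriptions = [
    "rural road rehabilitation and transport corridor upgrade",
    "solar power generation and rural electrification programme",
    "irrigation scheme and agricultural value chain development",
] * 20
labels = [0, 1, 1] * 20   # hypothetical investment-priority labels

counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(counts)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(topics, labels)
print("training accuracy:", clf.score(topics, labels))
```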
Procedia PDF Downloads 168
41781 The Current Status of Middle Class Internet Use in China: An Analysis Based on the Chinese General Social Survey 2015 Data and Semi-Structured Investigation
Authors: Abigail Qian Zhou
Abstract:
In today's China, the well-educated middle class, with stable jobs and above-average income, is the driving force behind its Internet society. Through the analysis of data from the 2015 Chinese General Social Survey and 50 interviewees, this study investigates this group's specific Internet usage. The findings demonstrate that daily life among the members of this socioeconomic group is closely tied to the Internet. For the Chinese middle class, the Internet is used to socialize and to entertain themselves and others. It is also used to search for and share information, as well as to build their identities. The empirical results of this study will provide a reference, supported by factual data, for enterprises seeking to target the Chinese middle class through online marketing efforts.
Keywords: middle class, Internet use, network behaviour, online marketing, China
Procedia PDF Downloads 123
41780 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility
Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha
Abstract:
Making research data openly accessible has been mandated by most funders over the last 5 years, as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always intuitive, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are being reused over the years, focusing on biodiversity, since research data are frequently reused in this field. Metadata of 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source because it provides citation counts for datasets, a feature not commonly available in most repositories. Analysis of dataset types, citation counts, and the creation and update times of datasets suggests that citation rates vary for different types of datasets: occurrence datasets, which contain more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of datasets from the earliest year, 2007, were updated after 11 years, and no dataset remained un-updated since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlation, 3- to 4-year-old datasets show a weak correlation, with more recently updated datasets receiving more citations. The results suggest that it takes several years for research datasets to accumulate citations. However, this investigation found that, when the same datasets are searched on Google Scholar or Scopus, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it can be applied to other fields of study as well.
Keywords: data citation, data reuse, research data sharing, webometrics
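A sketch of how such dataset metadata and citation counts can be pulled from the public GBIF API is given below; the endpoints follow the documented GBIF v1 API, but the exact parameter and field names should be verified against the current documentation.

```python
# Sketch: page through GBIF dataset metadata and look up how often each
# dataset is cited in the literature indexed by GBIF.
import requests

BASE = "https://api.gbif.org/v1"

datasets = requests.get(f"{BASE}/dataset", params={"limit": 20}).json()["results"]
for ds in datasets:
    lit = requests.get(f"{BASE}/literature/search",
                       params={"gbifDatasetKey": ds["key"], "limit": 0}).json()
    print(ds.get("type"), ds.get("created"), ds.get("modified"),
          "citations:", lit.get("count"))
```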
Procedia PDF Downloads 178
41779 Advancement of Computer Science Research in Nigeria: A Bibliometric Analysis of the Past Three Decades
Authors: Temidayo O. Omotehinwa, David O. Oyewola, Friday J. Agbo
Abstract:
This study aims to provide a proper perspective on the development landscape of Computer Science research in Nigeria. A bibliometric analysis of 4,333 bibliographic records of Computer Science research in Nigeria over the last 31 years (1991-2021) was therefore carried out. The bibliographic data were extracted from the Scopus database and analyzed using VOSviewer and the bibliometrix R package through the biblioshiny web interface. The findings revealed that Computer Science research in Nigeria has a growth rate of 24.19%. The most developed and well-studied research areas in the Computer Science field in Nigeria are machine learning, data mining, and deep learning. The social structure analysis revealed a need for improved international collaboration; sparsely established collaborations are largely influenced by geographic proximity. The funding analysis showed that Computer Science research in Nigeria is under-funded. The findings of this study will be useful for researchers conducting Computer Science-related research: experts can gain insights into how to develop a strategic framework that will advance the field in a more impactful manner, and government agencies and policymakers can utilize the outcome of this research to develop strategies for improved funding of Computer Science research.
Keywords: bibliometric analysis, biblioshiny, computer science, Nigeria, science mapping
Procedia PDF Downloads 112
41778 Strategic Management Methods in Non-Profit Making Organization
Authors: P. Řehoř, D. Holátová, V. Doležalová
Abstract:
This paper deals with an analysis of strategic management methods in non-profit making organizations in the Czech Republic. Strategic management represents an aggregate of methods and approaches that can be applied to managing organizations; in this article, the organizations in question associate owners and keepers of non-state forest properties. The authors use these methods of strategic management: stakeholder analysis, SWOT analysis, and questionnaire inquiries. The questionnaire was distributed electronically via e-mail, and in October 2013 we obtained data from a total of 84 questionnaires. Based on the results, the authors recommend using a confrontation strategy, which improves the competitiveness of non-profit making organizations.
Keywords: strategic management, non-profit making organization, strategy analysis, SWOT analysis, strategy, competitiveness
Procedia PDF Downloads 484
41777 A Critical Discourse Analysis of Jamaican and Trinidadian News Articles about D/Deafness
Authors: Melissa Angus Baboun
Abstract:
Utilizing a Critical Discourse Analysis (CDA) methodology and a theoretical framework based on disability studies, this study examined how Jamaican and Trinidadian newspapers discussed issues relating to the Deaf community. The term deaf was entered into the search engine tool of the online websites of the Jamaica Observer and the Trinidad & Tobago Guardian. All 27 articles that contained the term deaf in their content and were written between August 1, 2017 and November 15, 2017 were chosen for the study. The data analysis was divided into three steps: (1) listing and analyzing instances of metaphorical deafness (e.g., fall on deaf ears), (2) categorizing the content of the articles into models of disability discourse (the medical, socio-cultural, and supercrip models of disability narratives), and (3) analyzing any additional data found. A total of 42% of the articles pulled for this study did not deal with the Deaf community in any capacity, but rather contained idiomatic expressions that use deafness as a metaphor for a non-physical, undesirable trait; the most common such expression was fall on deaf ears. Regarding the models of disability discourse, eight articles were found to follow the socio-cultural model, two the medical model, and two the supercrip model. Additional findings include two instances of the term deaf and mute, an overwhelming use of lowercase d for the term deaf, and the misuse of the term translator (to mean interpreter).
Keywords: deafness, disability, news coverage, Caribbean newspapers
Procedia PDF Downloads 233
41776 A Regression Analysis Study of the Applicability of Side Scan Sonar-Based Safety Inspection of Underwater Structures
Authors: Chul Park, Youngseok Kim, Sangsik Choi
Abstract:
This study developed an electric jig for underwater structure inspection in order to solve the problems of applying side scan sonar to underwater inspection, and analyzed correlations in empirical data in order to enhance sonar data resolution. For the application of towed-type sonar to underwater structure inspection, an electric jig was developed, since it was difficult to inspect a cross-section with towed-type equipment. With the development of the electric jig for underwater structure inspection, it was possible to shorten inspection time by over 20% compared to conventional towed-type side scan sonar, and to inspect a proper cross-section through accurate angle control. The indoor test conducted to enhance sonar data resolution proved that water depth, the distance from the underwater structure, and the imaging angle influence resolution and data quality. Based on the data accumulated through field experience, multiple regression analysis was conducted on the correlations among the three variables. As a result, a relational equation for sonar operation according to water depth was derived.
Keywords: underwater structure, SONAR, safety inspection, resolution
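The regression step can be sketched as an ordinary least-squares fit of resolution on the three variables; the ten data rows below are fabricated placeholders, not the study's measurements.

```python
# Sketch: multiple regression of sonar image resolution on water depth,
# distance from the structure, and imaging angle.
import numpy as np

depth = np.array([3.0, 4.5, 6.0, 3.5, 9.0, 10.5, 5.0, 13.5, 15.0, 8.0])
dist  = np.array([2.0, 3.5, 2.5, 4.0, 3.0,  5.5, 4.5,  2.8,  6.0, 5.0])
angle = np.array([10., 25., 15., 30., 40.,  20., 35.,  45.,  50., 28.])
resol = np.array([9.1, 7.8, 8.4, 7.5, 6.6,  6.9, 7.0,  5.9,  5.0, 6.4])

X = np.column_stack([np.ones_like(depth), depth, dist, angle])
coef, *_ = np.linalg.lstsq(X, resol, rcond=None)
print("intercept, b_depth, b_dist, b_angle =", np.round(coef, 3))
```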
Procedia PDF Downloads 265
41775 Public Libraries as Social Spaces for Vulnerable Populations
Authors: Natalie Malone
Abstract:
This study explores the role of a public library in the creation of social spaces for vulnerable populations. The data stem from a longitudinal ethnographic study of the Anderson Library community, which included field notes, artifacts, and interview data. Thematic analysis revealed multiple meanings and thematic relationships within and among the data sources (interviews, field notes, and artifacts). Initial analysis suggests the Anderson Library serves as a space for vulnerable populations, with the sub-themes of fostering interpersonal communication to create a social space for children and fostering interpersonal communication to create a social space for parents and adults. These findings are important as they illustrate the potential of public libraries to serve as community-empowering institutions.
Keywords: capital, immigrant families, public libraries, space, vulnerable
Procedia PDF Downloads 154
41774 Study of a Few Additional Posterior Projection Data to 180° Acquisition for Myocardial SPECT
Authors: Yasuyuki Takahashi, Hirotaka Shimada, Takao Kanzaki
Abstract:
A dual-detector SPECT system is widely used for myocardial SPECT studies. With 180-degree (180°) acquisition, reconstructed images are distorted in the posterior wall of the myocardium due to the lack of sufficient posterior projection data. We hypothesized that the quality of myocardial SPECT images can be improved by adding the acquisition of only a few posterior projections to the ordinary 180° acquisition. The proposed acquisition method (the 180° plus acquisition method) uses a dual-detector SPECT system with a pair of detectors arranged perpendicular to each other at 90°. The sampling angle was 5°, and the acquisition range was 180°, from 45° right anterior oblique to 45° left posterior oblique. After the 180° acquisition, the detector moved to additional acquisition positions on the reverse side: once for 2 projections, twice for 4 projections, or three times for 6 projections. Since these acquisition methods cannot be performed on the present system, actual data acquisition was done over 360° with a sampling angle of 5°, and the projection data corresponding to the above acquisition positions were extracted for reconstruction. We performed phantom studies and a clinical study. SPECT images were compared by profile curve analysis and also quantitatively by contrast ratio. The distortion was improved by the 180° plus method, and profile curve analysis showed an increase in the cardiac cavity. Analysis with the contrast ratio revealed that the SPECT images of the phantoms and the clinical study were improved over 180° acquisition by the present methods. The difference in contrast was not clearly recognized between 180° plus 2 projections, 180° plus 4 projections, and 180° plus 6 projections. The 180° plus 2 projections method may therefore be feasible for myocardial SPECT, as both the distortion of the image and the contrast were improved.
Keywords: 180° plus acquisition method, a few posterior projections, dual-detector SPECT system, myocardial SPECT
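For the quantitative comparison, a common contrast-ratio definition is C = (Nmax - Nmin) / (Nmax + Nmin); whether the study used exactly this formula is an assumption. A toy sketch with invented profile-curve counts:

```python
# Sketch: comparing reconstructions by a standard contrast-ratio measure.
import numpy as np

def contrast_ratio(profile: np.ndarray) -> float:
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

profile_180      = np.array([410., 395., 120., 130., 400., 415.])  # 180° acquisition
profile_180plus2 = np.array([420., 405.,  95., 100., 415., 425.])  # 180° + 2 projections
print(contrast_ratio(profile_180), contrast_ratio(profile_180plus2))
```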
Procedia PDF Downloads 296
41773 Using an Artificial Intelligence Method to Explore the Important Factors in the Reuse of Telecare by the Elderly
Authors: Jui-Chen Huang
Abstract:
This research used an artificial intelligence method to explore elderly users' opinions on the reuse of telecare and the relationships among service quality, satisfaction, customer perceived value, and intention to reuse. This study conducted a questionnaire survey of the elderly, and a total of 124 valid copies of the questionnaire were obtained. It adopted a Backpropagation Network (BPN) to propose an effective and feasible analysis method, which differs from the traditional approach. Two thirds of the total sample (82 samples) were taken as the training data, and one third (42 samples) as the testing data. The training and testing data RMSE (root mean square error) are 0.022 and 0.009 in the BPN, respectively; as shown, the errors are acceptable. By contrast, the training and testing data RMSE are 0.100 and 0.099 in the regression model, respectively. In addition, the results showed that service quality has the greatest effect on the intention to reuse, followed by satisfaction and perceived value. The Backpropagation Network method thus performs better than the regression analysis, and this result can be used as a reference for future research.
Keywords: artificial intelligence, backpropagation network (BPN), elderly, reuse, telecare
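A scaled-down sketch of the BPN analysis, with the paper's two-thirds/one-third split and an RMSE comparison, is given below; the survey matrix is synthetic and the network size is an assumption.

```python
# Sketch: a small backpropagation network predicting intention to reuse
# from service quality, satisfaction, and perceived value.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(124, 3))            # quality, satisfaction, value
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, 124)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
bpn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("train RMSE:", mean_squared_error(y_tr, bpn.predict(X_tr)) ** 0.5)
print("test  RMSE:", mean_squared_error(y_te, bpn.predict(X_te)) ** 0.5)
```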
Procedia PDF Downloads 213
41772 A Landscape of Research Data Repositories in Re3data.org Registry: A Case Study of Indian Repositories
Authors: Prashant Shrivastava
Abstract:
The purpose of this study is to explore the re3data.org registry to identify the research data repository registration workflow process. A further objective is to depict the present development of research data repositories in India. The study begins with an approach to understanding the re3data.org registry framework and schema design, and then proceeds to explore the status of Indian research data repositories within the registry. Research data repositories are gaining wider relevance due to e-research concepts, and the re3data.org registry is a good tool for users and researchers to identify appropriate research data repositories for their research requirements. In the Indian environment, a compatible National Research Data Policy is the need of the hour to boost the management of research data. A registry for research data repositories is a crucial tool for discovering specific information in a specific domain. Research data repositories in India have not previously been studied; both the re3data.org registry and the status of Indian research data repositories are discussed in this study.
Keywords: research data, research data repositories, research data registry, re3data.org
Procedia PDF Downloads 326
41771 Fault Tree Analysis (FTA) of CNC Turning Center
Authors: R. B. Patil, B. S. Kothavale, L. Y. Waghmode
Abstract:
Today, the CNC turning center has become an important machine tool for the manufacturing industry worldwide. However, the breakdown of a single CNC turning center may result in the production of an entire plant being halted. For this reason, downtime for operations and preventive maintenance has to be minimized to ensure availability of the system. Indeed, improving the availability of the CNC turning center as a whole objectively leads to a substantial reduction in production loss and in operating, maintenance, and support costs. In this paper, the fault tree analysis (FTA) method is used for reliability analysis of a CNC turning center. The major faults associated with the system and the causes of those faults are presented graphically. Boolean algebra is used for evaluating the fault tree (FT) diagram and for deriving a governing reliability model for the CNC turning center. Failure data collected over a period of six years have been used for evaluating the model. Qualitative and quantitative analyses are also carried out to identify critical sub-systems and components of the CNC turning center. It is found that, at the end of the warranty period (one year), the reliability of the CNC turning center as a whole is around 0.61628.
Keywords: fault tree analysis (FTA), reliability analysis, risk assessment, hazard analysis
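The Boolean-algebra evaluation reduces to standard gate probability rules; a toy sketch follows, where the tree layout and failure probabilities are illustrative, not the paper's CNC data.

```python
# Sketch: evaluating a small fault tree with Boolean-algebra rules.
# OR gate: 1 - prod(1 - p_i); AND gate: prod(p_i), assuming independence.
from math import prod

def or_gate(*p):   # top event occurs if any input event occurs
    return 1 - prod(1 - pi for pi in p)

def and_gate(*p):  # top event occurs only if all input events occur
    return prod(p)

p_spindle, p_turret, p_axis_drive = 0.12, 0.08, 0.15
p_coolant = and_gate(0.10, 0.10)   # redundant pumps: both must fail
p_breakdown = or_gate(p_spindle, p_turret, p_axis_drive, p_coolant)
print("system reliability over the period:", round(1 - p_breakdown, 5))
```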
Procedia PDF Downloads 416
41770 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score, by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: anomaly detection, autoencoder, data centers, deep learning
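A scaled-down reading of this pipeline, one LSTM autoencoder trained on normal sequences with residual features feeding a random forest, could look like the sketch below; shapes, hyperparameters, and the synthetic signals are all assumptions.

```python
# Sketch: per-sensor LSTM autoencoder; the input-reconstruction residual
# supplies features for a random forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow import keras

def sensor_autoencoder(seq_len: int) -> keras.Model:
    inp = keras.Input(shape=(seq_len, 1))
    z = keras.layers.LSTM(16)(inp)                          # encoder
    z = keras.layers.RepeatVector(seq_len)(z)
    out = keras.layers.LSTM(16, return_sequences=True)(z)   # decoder
    out = keras.layers.TimeDistributed(keras.layers.Dense(1))(out)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

seq_len, n = 32, 200
rng = np.random.default_rng(0)
normal = np.sin(np.linspace(0, 60, seq_len * n)).reshape(n, seq_len, 1)
faulty = normal + rng.normal(0, 0.5, normal.shape)          # synthetic anomalies

ae = sensor_autoencoder(seq_len)
ae.fit(normal, normal, epochs=5, verbose=0)                 # train on normal data only

def residual_features(x):                                   # per-sequence error stats
    err = np.abs(x - ae.predict(x, verbose=0))
    return np.c_[err.mean(axis=(1, 2)), err.max(axis=(1, 2))]

X = np.vstack([residual_features(normal), residual_features(faulty)])
y = np.r_[np.zeros(n), np.ones(n)]
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```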
Procedia PDF Downloads 194
41769 Capturing Public Voices: The Role of Social Media in Heritage Management
Authors: Mahda Foroughi, Bruno de Anderade, Ana Pereira Roders
Abstract:
Social media platforms have been increasingly used by locals and tourists to express their opinions about buildings, cities, and built heritage in particular. Most recently, scholars have been using social media to conduct innovative research on built heritage and heritage management. Still, the application of artificial intelligence (AI) methods to analyze social media data for heritage management is seldom explored. This paper investigates the potential of short texts (sentences and hashtags) shared through social media as a data source, and of artificial intelligence methods for data analysis, for revealing the cultural significance (values and attributes) of built heritage. The city of Yazd, Iran, was taken as a case study, with a particular focus on windcatchers, key attributes conveying outstanding universal value, as inscribed on the UNESCO World Heritage List. The paper has three phases: 1) a state of the art on the intersection of public participation in heritage management and social media research; 2) a methodology for data collection and analysis, coding people's voices from Instagram and Twitter into values of windcatchers over the last ten years; and 3) preliminary findings on the comparison between the opinions of locals and tourists, sentiment analysis, and its association with the values and attributes of windcatchers. Results indicate that the age value is recognized as the most important value by all interest groups, while the political value is the least acknowledged. Besides, negative sentiments (e.g., critiques) are scarcely reflected in social media. The results confirm the potential of social media for heritage management in terms of (de)coding and measuring the cultural significance of built heritage for windcatchers in Yazd. The methodology developed in this paper can be applied to other attributes in Yazd and also to other case studies.
Keywords: social media, artificial intelligence, public participation, cultural significance, heritage, sentiment analysis
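The sentiment analysis step could, for instance, rely on an off-the-shelf classifier; the library choice (Hugging Face transformers) and the example posts below are assumptions, not the paper's actual pipeline.

```python
# Sketch: scoring sentiment of short heritage-related posts with a
# pretrained sentiment classifier.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
posts = [
    "The windcatchers of Yazd are a stunning piece of ancient engineering",
    "Sad to see another historic badgir crumbling from neglect",
]
for post, result in zip(posts, classifier(posts)):
    print(result["label"], round(result["score"], 3), "-", post)
```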
Procedia PDF Downloads 117
41768 Performance Evaluation and Planning for Road Safety Measures Using Data Envelopment Analysis and Fuzzy Decision Making
Authors: Hamid Reza Behnood, Esmaeel Ayati, Tom Brijs, Mohammadali Pirayesh Neghab
Abstract:
Investment projects in road safety planning can benefit from an effectiveness evaluation regarding their expected safety outcomes. The objective of this study is to develop a decision support system (DSS) to help policymakers make the right choices in road safety planning, based on the efficiency of previously implemented safety measures in a set of regions in Iran. The measures considered for each region include performance indicators for (1) police operations, (2) treated black spots, (3) freeway and highway facility supplies, (4) speed control cameras, (5) emergency medical services, and (6) road lighting projects. To this end, an inefficiency measure is calculated, defined as the proportion of fatality rates relative to the combined measure of road safety performance indicators (i.e., road safety measures), which should be minimized. The relative inefficiency of each region is modeled with the Data Envelopment Analysis (DEA) technique. In a next step, a fuzzy decision-making system is constructed to convert the information obtained from the DEA analysis into a rule-based system that policymakers can use to evaluate the expected outcomes of alternative investment strategies in road safety.
Keywords: performance indicators, road safety, decision support system, data envelopment analysis, fuzzy reasoning
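The DEA building block can be illustrated with a textbook input-oriented CCR model solved as a linear program; the orientation, the 4-region data matrix, and all names below are illustrative assumptions, not the study's exact formulation.

```python
# Sketch: input-oriented CCR DEA efficiency scores via linear programming.
# For each region o: minimize theta s.t. X'lam <= theta*x_o, Y'lam >= y_o.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [6.0, 2.0], [5.0, 5.0], [8.0, 4.0]])  # inputs per region
Y = np.array([[60.0], [70.0], [80.0], [75.0]])                  # outputs per region

def ccr_efficiency(o: int) -> float:
    n = len(X)
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])     # inputs: X'lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # outputs: -Y'lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                                    # theta in (0, 1]; 1 = efficient

for o in range(len(X)):
    print(f"region {o}: efficiency = {ccr_efficiency(o):.3f}")
```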
Procedia PDF Downloads 354
41767 Design and Development of a Computerized Medical Record System for Hospitals in Remote Areas
Authors: Grace Omowunmi Soyebi
Abstract:
A computerized medical record system is a collection of medical information about a person that is stored on a computer. One principal problem of most hospitals in rural areas is the use of a manual file management system for keeping records. A lot of time is wasted when a patient visits the hospital, possibly in an emergency, and the nurse or attendant has to search through voluminous files before the patient's file can be retrieved; this delay may cause something unexpected to happen to the patient. This data mining application is to be designed using a structured systems analysis and design method, which will help in a well-articulated analysis of the existing file management system, a feasibility study, and proper documentation of the design and implementation of a computerized medical record system. This computerized system will replace the file management system and help to quickly retrieve a patient's record with increased data security, provide access to clinical records for decision-making, and reduce the time it takes for a patient to be attended to.
Keywords: programming, computing, data, innovation
Procedia PDF Downloads 120
41766 Using HABIT to Establish the Chemicals Analysis Methodology for Maanshan Nuclear Power Plant
Authors: J. R. Wang, S. W. Chen, Y. Chiang, W. S. Hsu, J. H. Yang, Y. S. Tseng, C. Shih
Abstract:
In this research, the HABIT analysis methodology was established for Maanshan nuclear power plant (NPP). The Final Safety Analysis Report (FSAR), reports, and other data were used in this study. To evaluate the control room habitability under a CO2 storage burst, the HABIT methodology was used to perform this analysis. The HABIT result was below the R.G. 1.78 failure criteria, which indicates that Maanshan NPP habitability can be maintained. Additionally, a sensitivity study of the parameters (wind speed, atmospheric stability classification, air temperature, and control room intake flow rate) was also performed in this research.
Keywords: PWR, HABIT, habitability, Maanshan
Procedia PDF Downloads 446
41765 A Study of Cloud Computing Solution for Transportation Big Data Processing
Authors: Ilgin Gökaşar, Saman Ghaffarian
Abstract:
The need for fast processing of transportation big data, such as ridership data (e.g., smartcard data) and traffic operation data (e.g., traffic detector data), which requires a lot of computational power, is incontrovertible in Intelligent Transportation Systems. Nowadays, cloud computing is one of the important subjects in, and a popular information technology solution for, data processing. It enables users to process enormous amounts of data without needing their own computing power. Thus, it can also be a good choice for transportation big data processing. This paper examines how cloud computing can enhance transportation big data processing, contrasting its advantages and disadvantages and discussing cloud computing features.
Keywords: big data, cloud computing, Intelligent Transportation Systems, ITS, traffic data processing
Procedia PDF Downloads 470