Search results for: panel data analysis
41448 Anomaly Detection of Log Analysis using Data Visualization Techniques for Digital Forensics Audit and Investigation
Authors: Mohamed Fadzlee Sulaiman, Zainurrasyid Abdullah, Mohd Zabri Adil Talib, Aswami Fadillah Mohd Ariffin
Abstract:
In common digital forensics cases, an investigation may rely on the analysis of specific and relevant exhibits. Usually, the investigating officer defines the goals and objectives and advises the digital forensics analyst on reconstructing the trail of evidence while maintaining the specific scope of investigation. With technological growth, people are starting to realize the importance of cyber security to their organizations, and this new perspective creates awareness that digital forensics auditing must be put in place in order to measure possible threats or attacks on their cyber-infrastructure. Instead of performing an investigation on an incident basis, auditing may broaden the scope of investigation to the level of anomaly detection in the daily operation of an organization's cyber space. When handling a huge amount of data such as log files, performing a digital forensics audit for a large organization has proven to be an onerous task for the analyst, both in analyzing the huge files and in translating the findings in a way that stakeholders can clearly understand. Data visualization can be emphasized in conducting digital forensics audits and investigations to resolve both needs. This study will identify the important factors that should be considered when applying data visualization techniques in order to detect anomalies that meet the digital forensics audit and investigation objectives.
Keywords: digital forensic, data visualization, anomaly detection, log analysis, forensic audit, visualization techniques
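The abstract does not name a concrete detection rule. As a minimal, hypothetical illustration of flagging anomalous log volume before visualizing it (the z-score threshold and hourly-count framing are assumptions, not the study's method):

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Flag periods whose log-event count deviates from the mean by more than
    `threshold` standard deviations -- a simple stand-in for the visual
    outlier detection an analyst performs on a log-volume chart."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

# hourly event counts with one suspicious spike at index 5
print(flag_anomalies([100, 98, 102, 101, 99, 500], 1.5))
```

Flagged indices would then be the hours worth drilling into with the visualization techniques the study surveys.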
Procedia PDF Downloads 287
41447 Developing Logistics Indices for Turkey as an Indicator of Economic Activity
Authors: Gizem İntepe, Eti Mizrahi
Abstract:
Investment and financing decisions are influenced by various economic factors, so detailed analysis should be conducted in order to make decisions, not only by companies but also by governments. Such analysis can be conducted either at the company level or on a sectoral basis to reduce risks and to maximize profits. Sectoral disaggregation, shaped by seasonality effects, subventions, and data advantages or disadvantages, may appear in sectors behaving parallel to the BIST (Borsa Istanbul stock exchange) index. The proposed logistics indices could serve market needs as a decision parameter on a sectoral basis and also help forecast changes in import and export volumes. They are likewise an indicator of logistics activity, which is in turn a sign of economic mobility at the national level. Publicly available data from the Ministry of Transport, Maritime Affairs and Communications and the Turkish Statistical Institute are utilized to obtain five logistics indices, namely the exLogistic, imLogistic, fLogistic, dLogistic, and cLogistic indices. The efficiency and reliability of these indices are then tested.
Keywords: economic activity, export trade data, import trade data, logistics indices
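The abstract does not spell out how the five indices are computed; a common construction for activity indices of this kind, offered here purely as an assumption, is a base-period-normalized volume index:

```python
def volume_index(series, base_period=0):
    """Express each period's trade/logistics volume relative to a base
    period (base = 100), a minimal sketch of index construction."""
    base = series[base_period]
    return [round(100 * v / base, 1) for v in series]

# quarterly export volumes -> index values relative to the first quarter
print(volume_index([200, 220, 180]))  # [100.0, 110.0, 90.0]
```

A real index (such as exLogistic or imLogistic) would additionally weight and deseasonalize its components.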
Procedia PDF Downloads 337
41446 Blood Glucose Measurement and Analysis: Methodology
Authors: I. M. Abd Rahim, H. Abdul Rahim, R. Ghazali
Abstract:
Numerous non-invasive blood glucose measurement techniques have been developed by researchers, and near infrared (NIR) is currently a promising one. However, there is some disagreement on the optimal wavelength range that is suitable to be used as the reference for the glucose substance in the blood. This paper focuses on the experimental data collection technique and also on the method used to analyze the data gained from the experiment. The selection of a suitable linear or non-linear model structure is essential in a prediction system, as the developed system needs to be as accurate as possible.
Keywords: linear, near-infrared (NIR), non-invasive, non-linear, prediction system
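As a sketch of the linear end of that model choice (the calibration framing is an assumption; the paper does not state its model), a closed-form ordinary least squares fit of predicted glucose against a NIR signal feature might look like:

```python
def fit_line(x, y):
    """Closed-form ordinary least squares for y = a + b*x, e.g. calibrating
    a glucose estimate (y) against an NIR absorbance feature (x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # intercept 1.0, slope 2.0
```

A non-linear structure (e.g. a polynomial or neural model) would replace this line with a richer function of the same data.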
Procedia PDF Downloads 460
41445 Diagnostic Clinical Skills in Cardiology: Improving Learning and Performance with Hybrid Simulation, Scripted Histories, Wearable Technology, and Quantitative Grading – The Assimilate Excellence Study
Authors: Daly M. J, Condron C, Mulhall C, Eppich W, O'Neill J.
Abstract:
Introduction: In contemporary clinical cardiology, comprehensive and holistic bedside evaluation including accurate cardiac auscultation is in decline despite having positive effects on patients and their outcomes. Methods: Scripted histories and scoring checklists for three clinical scenarios in cardiology were co-created and refined through iterative consensus by a panel of clinical experts; these were then paired with recordings of auscultatory findings from three actual patients with known valvular heart disease. A wearable vest with embedded pressure-sensitive panel speakers was developed to transmit these recordings when examined at the standard auscultation points. RCSI medical students volunteered for a series of three formative long case examinations in cardiology (LC1 – LC3) using this hybrid simulation. Participants were randomised into two groups: Group 1 received individual teaching from an expert trainer between LC1 and LC2; Group 2 received the same intervention between LC2 and LC3. Each participant’s long case examination performance was recorded and blindly scored by two peer participants and two RCSI examiners. Results: Sixty-eight participants were included in the study (age 27.6 ± 0.1 years; 74% female) and randomised into two groups; there were no significant differences in baseline characteristics between groups. Overall, the median total faculty examiner score was 39.8% (35.8 – 44.6%) in LC1 and increased to 63.3% (56.9 – 66.4%) in LC3, with those in Group 1 showing a greater improvement in LC2 total score than that observed in Group 2 (p < .001). Using the novel checklist, intraclass correlation coefficients (ICC) were excellent between examiners in all cases: ICC .994 – .997 (p < .001); correlation between peers and examiners improved in LC2 following peer grading of LC1 performances: ICC .857 – .867 (p < .001). 
Conclusion: Hybrid simulation and quantitative grading improve learning and the standardisation of assessment, and enable direct comparisons of both performance and acumen in clinical cardiology.
Keywords: cardiology, clinical skills, long case examination, hybrid simulation, checklist
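The study reports intraclass correlation coefficients between raters. As a simpler stand-in (plain Pearson correlation between two raters' checklist scores, not the ICC model the authors used), rater agreement can be sketched as:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two raters' scores for the same
    candidates -- a rough proxy for inter-rater agreement, weaker than ICC
    because it ignores systematic score offsets between raters."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

# two raters scoring four candidates in perfect rank agreement
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

ICC additionally penalises constant offsets between raters, which is why it is the preferred statistic for checklist standardisation.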
Procedia PDF Downloads 110
41444 Status and Results from EXO-200
Authors: Ryan Maclellan
Abstract:
EXO-200 has provided one of the most sensitive searches for neutrinoless double-beta decay, utilizing 175 kg of enriched liquid xenon in an ultra-low background time projection chamber. This detector has demonstrated excellent energy resolution and background rejection capabilities. Using the first two years of data, EXO-200 has set a limit of 1.1x10^25 years at 90% C.L. on the neutrinoless double-beta decay half-life of Xe-136. The experiment has experienced a brief hiatus in data taking during a temporary shutdown of its host facility, the Waste Isolation Pilot Plant. EXO-200 expects to resume data taking in earnest this fall with upgraded detector electronics. Results from the analysis of EXO-200 data and an update on the current status of EXO-200 will be presented.
Keywords: double-beta, Majorana, neutrino, neutrinoless
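To see what such a half-life limit means in raw counts, the standard decay bookkeeping N ≈ ln2 · N_atoms · t / T½ (valid for t ≪ T½) can be sketched. This is only indicative: it treats the full 175 kg as Xe-136 and ignores the enrichment fraction and detection efficiency, which the real analysis must include.

```python
from math import log

AVOGADRO = 6.022e23

def expected_decays(mass_g, molar_mass, t_half_yr, live_time_yr):
    """Expected decays N = ln2 * N_atoms * t / T_half for t << T_half.
    Illustrative only: no enrichment fraction, no efficiency."""
    n_atoms = mass_g / molar_mass * AVOGADRO
    return log(2) * n_atoms * live_time_yr / t_half_yr

# 175 kg of Xe-136 over 2 years at the quoted half-life limit
print(expected_decays(175e3, 136, 1.1e25, 2.0))  # roughly 100 decays
```

The rough answer, on the order of a hundred decays over two years from over 10^26 atoms, shows why ultra-low backgrounds are the decisive experimental challenge.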
Procedia PDF Downloads 414
41443 Emerging Research Trends in Routing Protocol for Wireless Sensor Network
Authors: Subhra Prosun Paul, Shruti Aggarwal
Abstract:
Nowadays, routing protocols in Wireless Sensor Networks have become a promising technique in different fields of the latest computer technology. Routing in a Wireless Sensor Network is a demanding task due to the different design issues of the sensor nodes. Network architecture, number of nodes, routing traffic, the capacity of each sensor node, network consistency, and service value are important factors for the design and analysis of routing protocols in Wireless Sensor Networks. Additionally, internal energy, the distance between nodes, and the load of sensor nodes play a significant role in an efficient routing protocol. In this paper, our intention is to analyze the research trends in different routing protocols of Wireless Sensor Networks in terms of different parameters. In order to explain these research trends, data related to this research topic are analyzed with the help of the Web of Science and Scopus databases. The data analysis is performed from a global perspective, taking different parameters like author, source, document, country, organization, keyword, year, and number of publications. Different types of experiments are also performed, which help us to evaluate the recent research tendency in routing protocols for Wireless Sensor Networks. In order to do this, we have used the Web of Science and Scopus databases separately for data analysis. We have observed that there has been a tremendous development of research on this topic in the last few years, as it has become a very popular topic day by day.
Keywords: analysis, routing protocol, research trends, wireless sensor network
Procedia PDF Downloads 215
41442 A Survey of Sentiment Analysis Based on Deep Learning
Authors: Pingping Lin, Xudong Luo, Yifan Fan
Abstract:
Sentiment analysis is a very active research topic. Every day, Facebook, Twitter, Weibo, and other social media, as well as significant e-commerce websites, generate a massive amount of comments, which can be used to analyse people's opinions or emotions. The existing methods for sentiment analysis are based mainly on sentiment dictionaries, machine learning, and deep learning. The first two kinds of methods rely heavily on sentiment dictionaries or large amounts of labelled data; the third one overcomes these two problems, so in this paper we focus on it. Specifically, we survey various sentiment analysis methods based on the convolutional neural network, recurrent neural network, long short-term memory, deep neural network, deep belief network, and memory network. We compare their features, advantages, and disadvantages. Also, we point out the main problems of these methods, which may be worthy of careful study in the future. Finally, we examine the application of deep learning in multimodal sentiment analysis and aspect-level sentiment analysis.
Keywords: document analysis, deep learning, multimodal sentiment analysis, natural language processing
Procedia PDF Downloads 164
41441 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays issued from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 neutron generator (Thermo Fisher Scientific, Inc.), three sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated; these data can be combined with geographic coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and the main characteristics and modes of work when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. Comparison of these maps with maps created on the basis of chemical analysis, and with soil moisture measurements determined by soil electrical conductivity, showed similar patterns. The maps created by neutron-gamma analysis were also reproducible. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural mapping of soil elements.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
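The abstract does not state which interpolation step turns the georeferenced point measurements into a continuous map; inverse-distance weighting is one common choice in GIS workflows and is sketched here purely as an assumption, not as the authors' ArcGIS configuration:

```python
def idw(samples, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from a list of
    (xi, yi, value) point measurements -- a minimal sketch of gridding
    scanned elemental-content readings into a distribution map."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v  # exactly at a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# carbon readings at two points; estimate halfway between them
samples = [(0, 0, 10.0), (2, 0, 20.0)]
print(idw(samples, 1, 0))  # 15.0
```

Evaluating `idw` over a regular grid of (x, y) cells yields the raster behind an elemental distribution map.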
Procedia PDF Downloads 134
41440 The Quality of Food and Drink Product Labels Translation from Indonesian into English
Authors: Rudi Hartono, Bambang Purwanto
Abstract:
The translation quality of food and drink labels from Indonesian into English is poor: the translations are not accurate, are less than natural, and are difficult to read. Such label translations can be found on the packaging of some canned food and drink products produced and marketed by several companies in Indonesia. If this problem is left unchecked, it will lead to misunderstandings of the translation results and make consumers confused. This study was conducted to analyze the translation errors on food and drink product labels and to formulate a solution for better translation quality. The research design was evaluation research with a holistic criticism approach. The data used were words, phrases, and sentences translated from Indonesian into English and printed on food and drink product labels. The data were processed using Interactive Model Analysis, which comprises three main steps: collecting, classifying, and verifying data. Furthermore, the data were analyzed using content analysis to assess the accuracy, naturalness, and readability of the translation. The results showed that the translation quality of food and drink product labels from Indonesian into English has a level of accuracy of 60%, a level of naturalness of 50%, and a level of readability of 60%. These findings call for an effective strategy for translating food and drink product labels in the future.
Keywords: translation quality, food and drink product labels, holistic criticism approach, interactive model, content analysis
Procedia PDF Downloads 373
41439 Crossing Multi-Source Climate Data to Estimate the Effects of Climate Change on Evapotranspiration Data: Application to the French Central Region
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Climatic factors are the subject of considerable research, both methodologically and instrumentally. Under the effect of climate change, measuring climate parameters with precision remains one of the main objectives of the scientific community, from the perspective of assessing climate change and its repercussions on humans and the environment. However, many regions of the world suffer from a severe lack of reliable instruments, and the use of empirical methods becomes the only way to assess certain parameters that can act as climate indicators. Several scientific methods are used for the evaluation of evapotranspiration, either directly at climate stations or by empirical methods. All these methods make a point estimate and in no case capture the spatial variation of this parameter. We therefore propose in this paper the use of three sources of information (the Meteo France network of weather stations, world databases, and MODIS satellite images) to evaluate spatial potential evapotranspiration (ETP) using the Turc method. This first step will reflect the degree of relevance of the indirect (satellite) methods and their generalization to sites without stations. Representing the spatial variation of this parameter in a geographic information system (GIS) accounts for the heterogeneity of its behaviour. This heterogeneity is due to the influence of site morphological factors and makes it possible to appreciate the role of certain topographic and hydrological parameters. A phase of predicting the medium- and long-term evolution of evapotranspiration under the effect of climate change, by applying the Intergovernmental Panel on Climate Change (IPCC) scenarios, gives a realistic overview of the contribution of aquatic systems at the scale of the region.
Keywords: climate change, ETP, MODIS, IPCC scenarios
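The Turc method has several published variants; one widely used daily form for relative humidity above 50% is sketched below. Which exact variant and units the authors used is an assumption here, so treat this as illustrative:

```python
def turc_etp(t_mean_c, rg_cal_cm2_day):
    """Daily Turc potential evapotranspiration (mm/day), in the common
    form for relative humidity > 50%:
        ETP = 0.013 * t/(t + 15) * (Rg + 50)
    with t the mean air temperature (deg C) and Rg the global solar
    radiation in cal/cm^2/day."""
    return 0.013 * (t_mean_c / (t_mean_c + 15.0)) * (rg_cal_cm2_day + 50.0)

# e.g. 20 degC and 400 cal/cm2/day of global radiation
print(turc_etp(20, 400))  # about 3.34 mm/day
```

Applying this cell by cell to gridded temperature and MODIS-derived radiation is what turns the point formula into the spatial ETP maps described above.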
Procedia PDF Downloads 100
41438 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework
Authors: Lutful Karim, Mohammed S. Al-kahtani
Abstract:
Sensors are being used in various applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data, so aggregating and filtering sensor data is significantly important in designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. Thus, this paper introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer, at the sensors. Simulation results show that the PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
Keywords: big data, clustering, tree topology, data aggregation, sensor networks
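The abstract does not give PDDA's actual rules, so the sketch below only illustrates the general idea of priority-aware aggregation at the sensor layer; the forwarding rule and names are invented for illustration, not taken from the paper:

```python
def aggregate(readings, high_priority):
    """Toy priority-based aggregation: readings of high-priority sensor
    types are forwarded individually, while low-priority (often redundant)
    readings are averaged per type before transmission, cutting traffic."""
    urgent, buckets = [], {}
    for sensor_type, value in readings:
        if sensor_type in high_priority:
            urgent.append((sensor_type, value))
        else:
            buckets.setdefault(sensor_type, []).append(value)
    summary = {t: sum(vs) / len(vs) for t, vs in buckets.items()}
    return urgent, summary

# a smoke alarm passes through untouched; temperatures are averaged
print(aggregate([("smoke", 1), ("temp", 20), ("temp", 22)], {"smoke"}))
```

Fewer transmitted values means less radio time, which is where the energy and delay savings reported in the simulations would come from.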
Procedia PDF Downloads 346
41437 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the support vector machine (SVM), the k-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and their performance is evaluated to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
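Of the algorithms surveyed, k-means is the simplest to state. A plain Lloyd's-algorithm sketch on one-dimensional expression values (a toy illustration, not the paper's implementation) is:

```python
def kmeans(points, centers, iters=10):
    """Plain k-means (Lloyd's algorithm) on 1-D expression values;
    `centers` holds the initial centroids. Each pass assigns every point
    to its nearest centroid, then moves each centroid to its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# two obvious expression groups around 1.0 and 9.0
centers, clusters = kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0])
print(centers)  # close to [1.0, 9.0]
```

Real microarray data are high-dimensional, so the absolute-difference distance would become a Euclidean distance over expression vectors, but the loop is the same.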
Procedia PDF Downloads 323
41436 Effect of Sustainability Accounting Disclosure on Financial Performance of Listed Brewery Firms in Nigeria
Authors: Patricia Chinyere Oranefo
Abstract:
This study examined the effect of sustainability accounting disclosure on the financial performance of listed brewery firms in Nigeria. The dearth of empirical evidence and literature on "governance disclosure" as one of the explanatory variables of sustainability accounting reporting was the major motivation for this study. The main objective was to ascertain the effect of sustainability accounting disclosure on the financial performance of listed brewery firms in Nigeria. An ex-post facto research design was adopted. The population of the study comprises the five (5) brewery firms quoted on the floor of the Nigeria Exchange Group (NSX), and a sample of four (4) listed firms was drawn using purposive sampling. Secondary data were carefully sourced from the financial statements/annual reports and sustainability reports of the quoted brewery firms from 2012 to 2021. Panel regression analysis, with the aid of E-Views 10.0 software, was used to test the statistical significance of the effect of sustainability accounting disclosure on financial performance. The results showed that economic sustainability disclosure indexes do not significantly affect the return on assets of listed brewery firms in Nigeria. The findings further revealed that environmental sustainability disclosure indexes do not significantly affect return on equity. More so, social sustainability disclosure indexes significantly affect the net profit margin. Finally, the results established that governance sustainability disclosure indexes do not significantly affect earnings per share. Consequent upon the findings, this study recommends, among others, that managers of brewers in Nigeria should improve and sustain full disclosure practices on economic, environmental, social, and governance matters, following the guidelines of the Global Reporting Initiative (GRI), as these are capable of exerting a significant effect on the financial performance of firms in Nigeria.
Keywords: sustainability, accounting, disclosure, financial performance
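As a bare-bones illustration of the fixed-effects (within) idea behind panel regression (not the authors' E-Views specification, and with a single regressor for simplicity): demean the disclosure index (x) and the performance measure (y) within each firm, then pool the demeaned observations.

```python
def within_slope(panel):
    """Fixed-effects (within) slope of y on x for panel data.
    `panel` is a list of firms; each firm is a list of (x, y) observations.
    Demeaning within each firm removes firm-specific intercepts."""
    xs, ys = [], []
    for firm_obs in panel:
        mx = sum(x for x, _ in firm_obs) / len(firm_obs)
        my = sum(y for _, y in firm_obs) / len(firm_obs)
        xs += [x - mx for x, _ in firm_obs]
        ys += [y - my for _, y in firm_obs]
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

# two firms with different intercepts (10 vs 50) but a common slope of 2
panel = [[(1, 12), (2, 14), (3, 16)], [(1, 52), (2, 54)]]
print(within_slope(panel))  # 2.0
```

A full specification would add more regressors, time effects, and robust standard errors, which dedicated software such as E-Views handles.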
Procedia PDF Downloads 59
41435 Repeatable Scalable Business Models: Can Innovation Drive an Entrepreneur's Un-Validated Business Model?
Authors: Paul Ojeaga
Abstract:
Can the level of innovation use drive un-validated business models across regions? To what extent does industrial sector attractiveness drive firms' success across regions at the time of start-up? This study examines the role of innovation in start-up success in six regions of the world (namely Sub-Saharan Africa, the Middle East and North Africa, Latin America, South East Asia Pacific, the European Union, and the United States, representing North America) using macroeconomic variables. While there have been studies using firm-level data, results from such studies are not suitable for national policy decisions. The need to drive a regional innovation policy also begs for an answer, providing room for this study. Results using dynamic panel estimation show that innovation counts in the early infancy stage of the new-business life cycle. The results are robust even after controlling for time fixed effects, and the study presents variance-covariance robust standard errors.
Keywords: industrial economics, un-validated business models, scalable models, entrepreneurship
Procedia PDF Downloads 281
41434 Application of Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM) Database in Nursing Health Problems with Prostate Cancer - A Pilot Study
Authors: Hung Lin-Zin, Lai Mei-Yen
Abstract:
Prostate cancer is the most commonly diagnosed male cancer in the U.S., with a prevalence of around 1 in 8. The etiology of prostate cancer is still unknown, but some predisposing factors, such as age, black race, family history, and obesity, may increase the risk of the disease. In 2020, a total of 7,178 Taiwanese people were newly diagnosed with prostate cancer, accounting for 5.88% of all cancer cases, and the incidence rate ranked fifth among men. In that year, the total number of deaths from prostate cancer was 1,730, accounting for 3.45% of all cancer deaths; the death rate ranked sixth among men and accounted for 94.34% of deaths from cancers of the male reproductive organs. A search of domestic and foreign literature on analysis using the OMOP (Observational Medical Outcomes Partnership) database finds nearly a hundred published papers, but studies of nursing-related health problems and nursing measures built on the OMOP common data model databases of medical institutions are extremely rare. The OMOP common data model analysis platform is a system developed by the FDA in 2007, using a common data model (CDM) to analyze and monitor healthcare data. Building up relevant nursing information from the OMOP-CDM database is important to assist our daily practice. Therefore, we chose prostate cancer patients, a common population in our care, and used the OMOP-CDM database to explore their common associated health problems. With the assistance of OMOP-CDM database analysis, we can expect earlier diagnosis and prevention of prostate cancer patients' comorbidities, improving patient care.
Keywords: OMOP, nursing diagnosis, health problem, prostate cancer
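A comorbidity query of the kind described might run against the CDM's `condition_occurrence` table. The sketch below uses real OMOP table and column names but toy data and an illustrative (not verified) `concept_id` for prostate cancer; it is an assumption about the study's workflow, not its actual query:

```python
import sqlite3

PROSTATE_CANCER = 4163261  # illustrative concept_id, not a verified vocabulary code
DIABETES = 201826          # likewise illustrative

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE condition_occurrence (
    person_id INTEGER, condition_concept_id INTEGER);
INSERT INTO condition_occurrence VALUES
    (1, 4163261), (1, 201826),   -- patient 1: prostate cancer + comorbidity
    (2, 4163261), (2, 201826),   -- patient 2: prostate cancer + comorbidity
    (3, 4163261),                -- patient 3: prostate cancer only
    (4, 201826);                 -- patient 4: comorbidity only
""")

# conditions co-occurring among prostate-cancer patients, most frequent first
rows = con.execute("""
    SELECT co.condition_concept_id, COUNT(DISTINCT co.person_id) AS n
    FROM condition_occurrence co
    WHERE co.person_id IN (SELECT person_id FROM condition_occurrence
                           WHERE condition_concept_id = ?)
      AND co.condition_concept_id != ?
    GROUP BY co.condition_concept_id ORDER BY n DESC
""", (PROSTATE_CANCER, PROSTATE_CANCER)).fetchall()
print(rows)
```

In a real CDM instance the concept IDs would come from the OMOP standard vocabulary, and nursing problems would be mapped to standard concepts the same way.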
Procedia PDF Downloads 69
41433 Assessment of Routine Health Information System (RHIS) Quality Assurance Practices in Tarkwa Sub-Municipal Health Directorate, Ghana
Authors: Richard Okyere Boadu, Judith Obiri-Yeboah, Kwame Adu Okyere Boadu, Nathan Kumasenu Mensah, Grace Amoh-Agyei
Abstract:
Routine health information system (RHIS) quality assurance has become an important issue, not only because of its significance in promoting a high standard of patient care but also because of its impact on government budgets for the maintenance of health services. A routine health information system comprises healthcare data collection, compilation, storage, analysis, report generation, and dissemination on a routine basis in various healthcare settings. The data from an RHIS give a representation of health status, health services, and health resources; the sources of RHIS data are normally individual health records, records of services delivered, and records of health resources. Using reliable information from routine health information systems is fundamental in the healthcare delivery system. Quality assurance practices are measures put in place to ensure that the health data collected meet required quality standards, so that data generated from the system are fit for use. This study considered quality assurance practices in the RHIS processes. Methods: A cross-sectional study was conducted in eight health facilities of the Tarkwa Sub-Municipal Health Service in the Western Region of Ghana. The study involved routine quality assurance practices among the 90 health staff and management members selected from facilities in the sub-municipality who collected or used data routinely, from 24th December 2019 to 20th January 2020. Results: Generally, the Tarkwa Sub-Municipal Health Service appears to practice quality assurance during data collection, compilation, storage, analysis, and dissemination. The results show some achievement in quality control performance in report dissemination (77.6%), data analysis (68.0%), data compilation (67.4%), report compilation (66.3%), data storage (66.3%), and data collection (61.1%). Conclusions: Even though the Tarkwa Sub-Municipal Health Directorate engages in some control measures to ensure data quality, there is a need to strengthen the process to achieve the targeted performance level (90.0%). There was a significant shortfall in quality assurance performance, especially during data collection, with respect to the expected performance.
Keywords: quality assurance practices, assessment of routine health information system quality, routine health information system, data quality
Procedia PDF Downloads 79
41432 Analysis of the 2023 Karnataka State Elections Using Online Sentiment
Authors: Pranav Gunhal
Abstract:
This paper presents an analysis of sentiment on Twitter towards the Karnataka elections held in 2023, utilizing transformer-based models specifically designed for sentiment analysis in Indic languages. Through an innovative data collection approach involving a combination of novel methods of data augmentation, online data preceding the election were analyzed. The study focuses on sentiment classification, effectively distinguishing between positive, negative, and neutral posts while specifically targeting sentiment regarding a loss for the Bharatiya Janata Party (BJP) or a win for the Indian National Congress (INC). Leveraging high-performing transformer architectures, specifically IndicBERT with fine-tuned hyperparameters, the AI models employed in this study achieved remarkable accuracy in predicting the INC's victory in the election. The findings shed new light on the potential of cutting-edge transformer-based models in capturing and analyzing sentiment dynamics within the Indian political landscape. The implications of this research are far-reaching, providing invaluable insights to political parties for informed decision-making and strategic planning in preparation for the forthcoming 2024 Lok Sabha elections.
Keywords: sentiment analysis, Twitter, Karnataka elections, Congress, BJP, transformers, Indic languages, AI, novel architectures, IndicBERT, Lok Sabha elections
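The three-way positive/negative/neutral output can be illustrated without the transformer machinery. The tiny lexicon-based stand-in below (with an invented English cue-word list, nothing like the fine-tuned IndicBERT model the paper uses) shows only the shape of the classification task:

```python
POSITIVE = {"win", "victory", "support", "hope"}
NEGATIVE = {"loss", "scam", "fail", "anger"}

def classify(post):
    """Toy lexicon-based sentiment classifier: counts positive vs negative
    cue words and returns 'positive', 'negative', or 'neutral'."""
    words = post.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("A big victory for the people"))  # positive
print(classify("total loss and anger"))          # negative
print(classify("polls open today"))              # neutral
```

A transformer replaces the word counts with learned contextual representations, which is what makes it workable for code-mixed Indic-language posts where simple lexicons fail.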
Procedia PDF Downloads 84
41431 Performance Study of PV Power Plants in Algeria
Authors: Razika Ihaddadene, Nabila Ihaddadene
Abstract:
This paper aims to highlight the importance of applying the IEC 61724 standard in studying the performance of photovoltaic power plants on monthly and annual scales. Likewise, two photovoltaic power plants in two different climates are compared in order to determine the effect of climatic parameters on the analysis of photovoltaic performance. All data from the Ain Skhouna and Adrar photovoltaic power plants for 2018, and data from the Saida1 field for one month in 2019, were used. The results of the performance analysis according to the standard show that the Saida PV power plant performs better than the Adrar PV power plant, owing to the effect of higher ambient temperature at the latter. Increasing ambient temperature increases losses and decreases system efficiency and performance ratio; ambient temperature is thus a key element in the proper functioning of PV plants.
Keywords: PV power plants, IEC 61724 standard, grid-connected PV, Algeria
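The central IEC 61724 metric is the performance ratio: the final yield Yf = E/P0 (energy out per kW of rated capacity) divided by the reference yield Yr = H/G_STC (plane-of-array irradiation normalized to the 1 kW/m² STC irradiance). A minimal sketch, with the monthly figures below invented for illustration:

```python
def performance_ratio(e_out_kwh, p_rated_kwp, h_poa_kwh_m2, g_stc=1.0):
    """IEC 61724-style performance ratio:
    PR = final yield Yf (E/P0) over reference yield Yr (H/G_STC)."""
    yf = e_out_kwh / p_rated_kwp      # kWh per kWp of installed capacity
    yr = h_poa_kwh_m2 / g_stc         # equivalent full-irradiance hours
    return yf / yr

# e.g. 150 kWh produced per kWp in a month with 180 kWh/m2 of irradiation
print(performance_ratio(150.0, 1.0, 180.0))  # about 0.83
```

Temperature-driven losses show up directly as a lower PR, which is how the Saida/Adrar comparison in the study can be made climate-independent.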
Procedia PDF Downloads 77
41430 Emerging Trends of Geographic Information Systems in Built Environment Education: A Bibliometric Review Analysis
Authors: Kiara Lawrence, Robynne Hansmann, Clive Greentsone
Abstract:
Geographic Information Systems (GIS) are used to store, analyze, visualize, capture and monitor geographic data. Built environment professionals, and urban planners specifically, need to possess GIS skills to plan spaces effectively and efficiently. GIS application extends beyond the production of map artifacts and can be applied to spatially referenced, real-time data to support spatial visualization, analysis, community engagement, scenario building, and so forth. Though GIS has been used in the built environment for a few decades, its use in education has not been researched enough to draw conclusions about the trends of the last 20 years. This study looks to discover current and emerging trends of GIS in built environment education. A bibliometric review was carried out by exporting documents from Scopus and Web of Science using keywords around "Geographic information systems" OR "GIS" AND "built environment" OR “geography” OR "architecture" OR "quantity surveying" OR "construction" OR "urban planning" OR "town planning" AND “education” between the years 1994 and 2024. A total of 564 documents were identified and exported. The data were then analyzed using the VOSviewer software to generate network analyses and visualization maps of keyword co-occurrence, co-citation of documents and countries, and co-authorship networks. By analyzing each aspect of the data, deeper insight into GIS within education can be gained. Preliminary results from Scopus indicate that GIS research focusing on built environment education peaked prior to 2014, with much focus on remote sensing, demography, land use, engineering education, and so forth.
These data can help in understanding and implementing GIS in built environment education in ways that are both foundational and innovative, ensuring that students are equipped with sufficient knowledge and skills to carry out tasks in their respective fields.Keywords: architecture, built environment, construction, education, geography, geographic information systems, quantity surveying, town planning, urban planning
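The co-occurrence maps that VOSviewer draws start from pairwise keyword counts over the document set; a toy sketch of that counting step, using hypothetical keyword lists rather than the 564 exported records:

```python
from itertools import combinations
from collections import Counter

# Hypothetical keyword lists for three documents (not the actual Scopus/WoS export).
docs = [
    ["gis", "education", "urban planning"],
    ["gis", "remote sensing", "education"],
    ["gis", "education", "land use"],
]

# Count how often each unordered keyword pair appears in the same document.
cooccurrence = Counter()
for keywords in docs:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("education", "gis")])  # 3: the pair co-occurs in every document
```

VOSviewer then lays these pair weights out as a network, with link thickness proportional to the count; the counting itself is no more than the loop above.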
Procedia PDF Downloads 15
41429 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution
Authors: Najrullah Khan, Athar Ali Khan
Abstract:
The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS codes have been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modeling of data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete, or 'censored', data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among various optimization algorithms, the trust region method is found to be the best. In this paper, the TPGE model is also used to analyze lifetime data in the Bayesian paradigm. Results are evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.Keywords: Bayesian inference, JAGS, Laplace approximation, LaplacesDemon, posterior, R software, simulation
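The independent Metropolis step the abstract emphasizes can be sketched outside R/JAGS as well. The Python toy below uses a right-censored exponential model (deliberately simpler than the TPGE distribution) with made-up survival data, purely to illustrate the accept/reject logic with a fixed proposal density:

```python
import math
import random

random.seed(0)

# Hypothetical right-censored survival data: (time, event flag; 0 = censored).
data = [(2.1, 1), (3.4, 1), (1.2, 0), (5.0, 1), (4.3, 0), (2.8, 1)]

def log_post(rate):
    """Log-posterior of an exponential rate with a flat prior on rate > 0."""
    if rate <= 0:
        return -math.inf
    ll = 0.0
    for t, event in data:
        # events contribute log f(t); censored times contribute log S(t) = -rate*t
        ll += (math.log(rate) - rate * t) if event else (-rate * t)
    return ll

# Independent Metropolis: proposals drawn from a fixed Exp(1) density q(x) = e^(-x).
rate, samples = 1.0, []
for _ in range(20_000):
    prop = random.expovariate(1.0)
    # log acceptance ratio: [pi(prop) q(rate)] / [pi(rate) q(prop)], log q(x) = -x
    log_alpha = log_post(prop) - log_post(rate) + (prop - rate)
    if math.log(random.random()) < log_alpha:
        rate = prop
    samples.append(rate)

posterior_mean = sum(samples[5000:]) / len(samples[5000:])
print(round(posterior_mean, 2))
```

Because the proposal is independent of the current state, the acceptance ratio must correct for the proposal density at both points; that is the `(prop - rate)` term above, and it is the only way this variant differs from random-walk Metropolis.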
Procedia PDF Downloads 535
41428 Network Analysis of Genes Involved in the Biosynthesis of Medicinally Important Naphthodianthrone Derivatives of Hypericum perforatum
Authors: Nafiseh Noormohammadi, Ahmad Sobhani Najafabadi
Abstract:
Hypericins (hypericin and pseudohypericin) are natural naphthodianthrone derivatives produced by Hypericum perforatum (St. John’s Wort) which have many medicinal properties, such as antitumor, antineoplastic, antiviral, and antidepressant activities. Production and accumulation of hypericin in the plant are influenced by both genetic and environmental conditions. Despite the existence of different high-throughput data on the plant, the genetic dimensions of hypericin biosynthesis have not yet been completely understood. In this research, 21 high-quality RNA-seq datasets from different parts of the plant were integrated with metabolic data to reconstruct a coexpression network. Results showed that a cluster of 30 transcripts was correlated with total hypericin. The identified transcripts were divided into three main groups based on their functions: hypericin biosynthesis genes, transporters and detoxification genes, and transcription factors (TFs). In the biosynthetic group, different isoforms of polyketide synthases (PKSs) and phenolic oxidative coupling proteins (POCPs) were identified. Phylogenetic analysis of protein sequences, integrated with gene expression analysis, showed that some of the POCPs seem to be very important in the biosynthetic pathway of hypericin. In the TFs group, six TFs were correlated with total hypericin. qPCR analysis of these six TFs confirmed that three of them were highly correlated. The genes identified in this research are a rich resource for further studies on the molecular breeding of H. perforatum in order to obtain varieties with high hypericin production.Keywords: hypericin, St. John’s Wort, data mining, transcription factors, secondary metabolites
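The coexpression step, correlating transcript profiles with total hypericin across samples, reduces to a Pearson-correlation screen; a sketch with hypothetical expression values (the gene names and numbers below are illustrative, not the actual RNA-seq data):

```python
# Rank transcripts by Pearson correlation with total hypericin across samples.
# All values are hypothetical, not the actual H. perforatum measurements.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hypericin = [0.2, 0.5, 1.1, 1.8, 2.4]        # total hypericin per sample
expression = {
    "PKS_iso1": [1.0, 2.1, 4.0, 6.2, 8.1],   # tracks hypericin closely
    "POCP_3":   [0.9, 1.4, 2.8, 4.1, 5.5],
    "control":  [3.0, 2.9, 3.1, 3.0, 2.8],   # flat, uncorrelated
}

# Keep transcripts whose absolute correlation with hypericin exceeds a cutoff.
correlated = {g: v for g, v in expression.items()
              if abs(pearson(v, hypericin)) > 0.9}
print(sorted(correlated))  # ['PKS_iso1', 'POCP_3']
```

The reported 30-transcript cluster is, in essence, the output of this kind of screen applied genome-wide, followed by the functional grouping the abstract describes.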
Procedia PDF Downloads 93
41427 An Investigation into the Views of Distant Science Education Students Regarding Teaching Laboratory Work Online
Authors: Abraham Motlhabane
Abstract:
This research analysed the written views of science education students regarding the teaching of laboratory work in the online mode. The research adopted a qualitative methodology aimed at investigating small and distinct groups, normally regarded as a single-site study. Qualitative research was used to describe and analyze the phenomena from the students’ perspective. This means the research began with assumptions of a world view that uses theoretical lenses of research problems inquiring into the meanings individual students make. The research was conducted with three groups of students studying for the Postgraduate Certificate in Education, the Bachelor of Education, and the Honours Bachelor of Education, respectively. In each of these study programmes, the science education module is compulsory. Five science education students from each study programme were purposively selected; therefore, 15 students participated in the research. In order to analyze the data, the data were first printed and hard copies were used in the analysis. The data were read several times, and key concepts and ideas were highlighted. Themes and patterns were identified to describe the data, and coding was used as a process of organising and sorting the data. The findings of the study are very diverse: some students are in favour of online laboratories, whereas other students argue that science can only be learnt through hands-on experimentation.Keywords: online learning, laboratory work, views, perceptions
Procedia PDF Downloads 144
41426 Risk Factors’ Analysis on Shanghai Carbon Trading
Authors: Zhaojun Wang, Zongdi Sun, Zhiyuan Liu
Abstract:
First of all, the carbon trading price and trading volume in Shanghai are transformed by the Fourier transform, and the frequency response diagram is obtained. Then, the frequency response diagram is analyzed and a Blackman filter is designed. The Blackman filter is used for filtering, and the carbon trading time-domain and frequency response diagrams are obtained. After wavelet analysis, the carbon trading data were processed to obtain the average values over each 5, 10, 20, 30, and 60 days. Finally, these data are used as input to a Back Propagation Neural Network model for prediction.Keywords: Shanghai carbon trading, carbon trading price, carbon trading volume, wavelet analysis, BP neural network model
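The preprocessing chain described, FFT, Blackman-window filtering, and multi-window averaging, can be sketched with NumPy; the price series below is simulated, not the actual Shanghai data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical daily carbon price series (trend + noise), not actual Shanghai data.
days = np.arange(250)
price = 40 + 0.02 * days + rng.normal(0, 1.5, size=days.size)

# Frequency view via FFT, then smoothing with a Blackman-window FIR filter.
spectrum = np.abs(np.fft.rfft(price - price.mean()))
taps = np.blackman(21)
taps /= taps.sum()                       # normalize to unit gain at DC
smoothed = np.convolve(price, taps, mode="valid")

# Rolling means over the window lengths used as model inputs.
def rolling_mean(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

ma5, ma60 = rolling_mean(price, 5), rolling_mean(price, 60)
print(ma5.shape, ma60.shape)  # (246,) (191,)
```

The rolling-mean vectors for the five window lengths would then be stacked as the input features of the BP neural network; the network itself is omitted here since the abstract gives no architecture details.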
Procedia PDF Downloads 391
41425 An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation
Authors: Zhidong Zhang
Abstract:
This study established a mixed methods model for assessing statistics learning with Bayesian network models. There are three variants of exploratory sequential designs. One of these designs has three linked steps: qualitative data collection and analysis; development of a quantitative measure, instrument, or intervention; and quantitative data collection and analysis. The study used a scoring model of analysis of variance (ANOVA) as the content domain. The research examines students’ learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx1 + γx2 + ε, served as a cognitive task to collect data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process: the hierarchical structure was obtained from qualitative cognitive analysis. The data from students’ ANOVA score model learning were used to provide evidence to the hierarchical Bayesian network model through the evidential variables. Finally, the assessment results of students’ ANOVA score model learning were reported. Briefly, this was a mixed methods research design applied to statistics learning assessment. Mixed methods designs expand the possibilities for researchers to establish advanced quantitative models initially grounded in a theory-driven qualitative mode.Keywords: exploratory sequential design, ANOVA score model, Bayesian network model, mixed methods research design, cognitive analysis
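The evidential update at the heart of the Bayesian network can be illustrated with a single mastery node; all probabilities below are hypothetical, and the real model is hierarchical rather than this one-node toy:

```python
# Single-node Bayesian update: P(mastery | observed responses) for one
# sub-step of the ANOVA score model task. All probabilities are hypothetical.

prior_mastery = 0.5
p_correct = {True: 0.85, False: 0.25}  # P(correct response | mastery state)

def update(prior, observed_correct):
    """Posterior P(mastery) after one observed response, by Bayes' rule."""
    like_m = p_correct[True] if observed_correct else 1 - p_correct[True]
    like_n = p_correct[False] if observed_correct else 1 - p_correct[False]
    num = like_m * prior
    return num / (num + like_n * (1 - prior))

# Two correct responses on sub-steps of fitting the score model.
p = update(prior_mastery, True)
p = update(p, True)
print(round(p, 3))
```

In the hierarchical network described in the abstract, many such conditional nodes (semantic and performance sub-steps) feed a parent proficiency node, but every edge performs essentially this update.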
Procedia PDF Downloads 178
41424 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) has been considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients situate AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work was aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, MIT-BIH AF, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR interval windows, and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the important features for discriminating between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than do existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
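The abstract does not name the four RR-interval features used, but common choices such as mean RR, SDNN, RMSSD, and pNN50 give a feel for the feature-extraction step; the two windows below are invented, not taken from the cited databases:

```python
import numpy as np

def rr_features(rr_ms):
    """Four illustrative features from a 1-min window of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    mean_rr = rr.mean()
    sdnn = rr.std(ddof=1)                      # overall variability
    rmssd = np.sqrt(np.mean(diffs ** 2))       # beat-to-beat variability
    pnn50 = np.mean(np.abs(diffs) > 50) * 100  # % successive diffs > 50 ms
    return mean_rr, sdnn, rmssd, pnn50

# Hypothetical windows: regular sinus rhythm vs. irregular AF-like intervals.
nsr = [800, 810, 795, 805, 800, 808, 798]
af = [620, 910, 540, 1020, 700, 880, 460]
print(rr_features(nsr)[3], rr_features(af)[3])  # 0.0 100.0
```

Feature vectors like these, computed per window, are what PCA would reduce and the LVQ network would classify; the irregularity features separate the two rhythms sharply even in this toy example.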
Procedia PDF Downloads 267
41423 Investigating Dynamic Transition Process of Issues Using Unstructured Text Analysis
Authors: Myungsu Lim, William Xiu Shun Wong, Yoonjin Hyun, Chen Liu, Seongi Choi, Dasom Kim, Namgyu Kim
Abstract:
The amount of real-time data generated through various mass media has been increasing rapidly. In this study, we performed topic analysis using unstructured text data distributed through news articles. As one of the most prevalent applications of topic analysis, the issue tracking technique investigates changes in the social issues identified through topic analysis. Currently, traditional issue tracking is conducted by identifying the main topics of documents covering an entire period at once and analyzing the occurrence of each topic by period. However, this traditional approach has the limitation that it cannot discover the dynamic mutation process of complex social issues. The purpose of this study is to overcome this limitation of the existing issue tracking method. We first derived the core issues of each period and then discovered the dynamic mutation process of various issues. We further analyze the mutation process from the perspective of issue categories in order to figure out the pattern of issue flow, including the frequency and reliability of the pattern. In other words, this study allows us to understand the components of complex issues by tracking the dynamic history of issues. This methodology can facilitate a clearer understanding of complex social phenomena by providing the mutation history and related category information of the phenomena.Keywords: data mining, issue tracking, text mining, topic analysis, topic detection, trend detection
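The period-wise core-issue derivation and the transition (mutation) counting can be sketched as follows; the topic labels and the document alignment between periods are hypothetical simplifications of the method described:

```python
from collections import Counter

# Hypothetical topic labels per tracked document, for two consecutive periods.
period_topics = {
    "2015-Q1": ["economy", "economy", "health", "election"],
    "2015-Q2": ["economy", "health", "health", "election"],
}

# Core issue per period: the most frequent topic label.
core = {p: Counter(t).most_common(1)[0][0] for p, t in period_topics.items()}

# Mutation pattern between periods, assuming document-level alignment:
# count how often each topic in Q1 becomes each topic in Q2.
transitions = Counter(zip(period_topics["2015-Q1"], period_topics["2015-Q2"]))
print(core["2015-Q1"], core["2015-Q2"], transitions[("economy", "health")])
```

Normalizing each transition count by the size of its source topic would give the frequency of a flow pattern, the quantity the abstract tracks alongside its reliability.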
Procedia PDF Downloads 408
41422 A Social Cognitive Investigation in the Context of Vocational Training Performance of People with Disabilities
Authors: Majid A. AlSayari
Abstract:
The study reported here investigated social cognitive theory (SCT) in the context of vocational rehabilitation (VR) for people with disabilities. The prime purpose was to increase knowledge of VR phenomena and make recommendations for improving VR services. The sample consisted of 242 persons with spinal cord injuries (SCI) who completed questionnaires; a further 32 participants were trainers. Analysis of the questionnaire data was carried out using factor analysis, multiple regression analysis, and thematic analysis. The analysis suggested that, in motivational terms, and consistent with research carried out in other academic contexts, self-efficacy was the best predictor of VR performance. The author concludes that VR self-efficacy predicted VR training performance.Keywords: people with physical disabilities, social cognitive theory, self-efficacy, vocational training
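The multiple regression step, testing self-efficacy as a predictor of VR performance, can be sketched with simulated scores; the predictor names, coefficients, and distributions below are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 242  # matches the study's SCI sample size; all scores below are simulated

# Simulated SCT predictors and a performance outcome driven mainly by self-efficacy.
self_eff = rng.normal(3.5, 0.6, n)
outcome_exp = rng.normal(3.0, 0.7, n)
performance = 0.8 * self_eff + 0.1 * outcome_exp + rng.normal(0, 0.3, n)

# Ordinary least squares: intercept + two predictors.
X = np.column_stack([np.ones(n), self_eff, outcome_exp])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(beta[1] > beta[2])  # the self-efficacy coefficient dominates
```

Comparing standardized coefficients like `beta[1]` and `beta[2]` is how one would support the claim that self-efficacy is the best predictor among the SCT constructs.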
Procedia PDF Downloads 314
41421 Assessment of Politeness Behavior on Communicating: Validation of Scale through Exploratory Factor Analysis and Confirmatory Factor Analysis
Authors: Abdullah Pandang, Mantasiah Rivai, Nur Fadhilah Umar, Azam Arifyadi
Abstract:
This study aims to measure the validity of the politeness behaviour scale and obtain a model that fits the scale. The researchers developed the Politeness Behavior on Communicating (PBC) scale using a descriptive quantitative method. The population in this study comprised students in three provinces, namely South Sulawesi, West Sulawesi, and Central Sulawesi, recorded in the 2022/2023 academic year. The sampling technique was stratified random sampling, with the number of samples determined using the Slovin formula; the sample of this research is 1,200 students. The research instrument is the PBC scale, which consists of five indicators: self-regulation of compensation behaviour, self-efficacy of compensation behaviour, fulfilment of social expectations, positive feedback, and no strings attached. The PBC scale consists of 34 statement items. The data analysis technique is divided into two types: a validity test on the correlated item values and an item reliability test referring to Cronbach's and McDonald's alpha standards, using the JASP application. Furthermore, the data were analyzed using confirmatory factor analysis (CFA) and exploratory factor analysis (EFA). The results showed that the adapted Politeness Behavior on Communicating (PBC) scale was on the fit index, with a chi-square value (711.800/375), RMSEA (0.53), GFI (0.990), CFI (0.987), GFI (0.985).Keywords: polite behavior in communicating, positive communication, exploratory factor analysis, confirmatory factor analysis
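The Cronbach's alpha reliability check mentioned above has a short closed form, alpha = k/(k-1) * (1 - sum of item variances / total-score variance); a sketch on a small invented item-response matrix, not the actual 34-item PBC data:

```python
# Cronbach's alpha for a small illustrative item-response matrix
# (5 respondents x 4 items; values are invented, not the PBC data).

scores = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

k = len(scores[0])
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))  # 0.919
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; JASP reports this same statistic (alongside McDonald's omega) from the item-score matrix.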
Procedia PDF Downloads 124
41420 Genodata: The Human Genome Variation Using BigData
Authors: Surabhi Maiti, Prajakta Tamhankar, Prachi Uttam Mehta
Abstract:
Since the accomplishment of the Human Genome Project, there has been an unparalleled escalation in the sequencing of genomic data. This project was the first major vault in the field of medical research, especially in genomics, and it won accolades by using a concept called BigData, which was earlier used extensively to gain value for business. BigData makes use of data sets which are generally files of terabytes, petabytes, or exabytes in size; such data sets were traditionally used and managed using Excel sheets and RDBMS. The voluminous data made the process tedious and time-consuming, and hence a stronger framework called Hadoop was introduced in the field of genetic sciences to make data processing faster and more efficient. This paper focuses on using SPARK, which is gaining momentum with the advancement of BigData technologies. Cloud storage is an effective medium for storing the large data sets generated from genetic research and the resultant sets produced from SPARK analysis.Keywords: human genome project, BigData, genomic data, SPARK, cloud storage, Hadoop
Procedia PDF Downloads 259
41419 Impact of Hashtags in Tweets Regarding COVID-19 on the Psyche of Pakistanis: A Critical Discourse Analytical Study
Authors: Muhammad Hamza
Abstract:
This study attempts to analyze the social media reports regarding Covid-19 that impacted the psyche of Pakistanis. The study is delimited to hashtags from tweets on a single social media platform. During Covid-19, it was observed that the pandemic affected the psychological condition of Pakistanis. With the application of the three-dimensional model presented by Fairclough, together with the data analysis software “FireAnt”, a social media and data analysis toolkit used to filter, identify, report, and export data from social media accurately, a detailed and explicit exploration of the various hashtags used by users from different fields was conducted. This study employed both quantitative and qualitative methods of analysis and examined the perspectives of Pakistanis behind the use of various hashtags through the lens of Critical Discourse Analysis (CDA). While conducting this research, CDA was helpful in revealing the connection between the psyche of the people and the Covid-19 pandemic. It was found how different Pakistanis used social media and how Covid-19 impacted their psyche. After collecting and analyzing the hashtags from Twitter, it was concluded that the majority of people received a negative impact from social media reports, while some people used their hashtags positively and were found to be positive during Covid-19, and some people were found to be neutral.Keywords: Covid, Covid-19, psyche, Covid Pakistan
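The FireAnt-style filtering and tallying of hashtags can be sketched in a few lines; the tweets and the sentiment lexicon below are invented for illustration and do not reflect the study's corpus or its coding scheme:

```python
import re
from collections import Counter

# Hypothetical tweets and a toy hashtag-sentiment lexicon (not the study's data).
tweets = [
    "Stay strong Pakistan #CovidFear #StayHome",
    "We will get through this #HopeWins #StayHome",
    "Cases rising again #CovidFear",
]
lexicon = {"#covidfear": "negative", "#hopewins": "positive", "#stayhome": "neutral"}

# Extract hashtags, normalize case, and tally the sentiment of each occurrence.
hashtags = [h.lower() for t in tweets for h in re.findall(r"#\w+", t)]
sentiment = Counter(lexicon.get(h, "unknown") for h in hashtags)
print(sentiment["negative"], sentiment["positive"], sentiment["neutral"])  # 2 1 2
```

In the study itself the sentiment labels came from CDA of the hashtags in context rather than a fixed lexicon; the quantitative side of the mixed design is essentially this frequency tally.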
Procedia PDF Downloads 59