Search results for: R data science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26133

24603 Determining the Awareness Level of Chefs and Students on Food Safety and Allergens in Kano State, Nigeria and Ankara City in Turkey

Authors: Balarabe Bilyaminu Ismail, Osman Cavus, Fügen Durlu Özkaya

Abstract:

This study aims to determine the level of awareness of chefs and students of food science and technology on food safety in general and allergens in particular. To obtain appropriate data, a questionnaire comprising 19 questions covering a range of food safety issues and allergens in foods was used to collect information through face-to-face interviews. Interviews were conducted with 284 people in Nigeria and Turkey. Sixty-eight percent of respondents were from Turkey, of whom 31.3% were students and 68.7% were chefs; thirty-one percent were from Nigeria, comprising 33.7% students and 66.3% chefs. The results indicated that most findings of scientific studies on food safety issues have not been directly applied by people working in the food sector. Additionally, the knowledge level of gastronomy and culinary arts students on food safety and allergens is significantly higher than that of the restaurant chefs who prepare and serve food to the public. The study therefore concluded that proper training of food business operators is critical to ensuring the safety of foods and the control of allergens.

Keywords: allergens, food safety, questionnaire survey, training

Procedia PDF Downloads 344
24602 Finding the Free Stream Velocity Using Flow Generated Sound

Authors: Saeed Hosseini, Ali Reza Tahavvor

Abstract:

Sound processing is one of the subjects that has recently attracted many researchers. It is efficient and usually less expensive than other methods. In this paper, flow-generated sound is used to estimate the speed of free flows. Many sound samples were gathered. After analyzing the data, a parameter named wave power was chosen. For all samples, the wave power was calculated and averaged for each flow speed. A curve was fitted to the averaged data, and a correlation between wave power and flow speed was found. Test data were used to validate the method, and the errors for all test data were under 10 percent. The speed of the flow can therefore be estimated by calculating the wave power of the flow-generated sound and using the proposed correlation.
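
A minimal sketch of the calibrate-then-invert idea described above. The abstract does not define "wave power" or the fitted curve, so the mean squared amplitude, the quadratic fit, and all numbers below are assumptions for illustration only.

```python
import numpy as np

def wave_power(samples):
    """Mean squared amplitude of an audio block (assumed proxy for the paper's 'wave power')."""
    samples = np.asarray(samples, dtype=float)
    return np.mean(samples ** 2)

# hypothetical averaged calibration data: wave power measured at known flow speeds
speeds = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # m/s
powers = np.array([0.8, 3.1, 7.2, 12.9, 20.1])  # averaged wave power at each speed

# fit a curve (quadratic, power -> speed) so a new recording can be mapped to a speed estimate
estimate_speed = np.poly1d(np.polyfit(powers, speeds, deg=2))

new_recording = np.random.default_rng(0).normal(scale=2.5, size=44100)  # stand-in audio samples
print("estimated flow speed:", estimate_speed(wave_power(new_recording)))
```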

Keywords: flow-generated sound, free stream, sound processing, speed, wave power

Procedia PDF Downloads 399
24601 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbon from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular for developing data-driven insights that support better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. The objective of this research is to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserve development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and other machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery, and better economic returns from future wells in unconventional oil reserves.
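
A minimal sketch of the exploratory pipeline named in the abstract (K-means clustering, principal component analysis, and factor analysis), assuming scikit-learn and a randomly generated stand-in for the well database; the real feature names, cluster counts, and the differential-evolution optimization step are not shown.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(42)
# hypothetical completion/production attributes for 1000 wells (e.g. stage count, proppant, rates)
X = rng.normal(size=(1000, 8))
X_std = StandardScaler().fit_transform(X)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)     # partition wells
scores = PCA(n_components=2).fit_transform(X_std)                                 # 2-D view of the data
loadings = FactorAnalysis(n_components=3, random_state=0).fit(X_std).components_  # EFA-style factors

print(np.bincount(clusters), scores.shape, loadings.shape)
```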

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 271
24600 Efficiency of DMUs in Presence of New Inputs and Outputs in DEA

Authors: Esmat Noroozi, Elahe Sarfi, Farha Hosseinzadeh Lotfi

Abstract:

Examining the impacts of data modification is considered sensitivity analysis. Many studies have considered the modification of input and output data in DEA. An issue that has not heretofore been considered in DEA sensitivity analysis is modification of the number of inputs and/or outputs and determining the impact of this modification on the efficiency status of DMUs. This paper presents systems that show the impact of adding one or more inputs or outputs on the efficiency status of DMUs; furthermore, a model is presented for recognizing the minimum number of inputs and/or outputs, from among specified inputs and outputs, that can be added so that an inefficient DMU becomes efficient. Finally, the presented systems and model are applied to a set of real data, and the results are reported.
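
For context, a minimal sketch of the standard input-oriented CCR efficiency score that such sensitivity analyses start from, not the paper's specific systems for added inputs or outputs. It assumes SciPy's linear-programming solver and uses made-up data for four DMUs.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0. X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                     # variables [theta, lambda_1..lambda_n]; minimize theta
    A_in = np.hstack([-X[:, [j0]], X])             # sum_j lambda_j x_ij <= theta * x_i,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])      # sum_j lambda_j y_rj >= y_r,j0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

X = np.array([[2.0, 3.0, 6.0, 4.0], [3.0, 2.0, 5.0, 8.0]])   # hypothetical inputs
Y = np.array([[4.0, 5.0, 7.0, 6.0]])                         # hypothetical output
print([round(ccr_efficiency(X, Y, j), 3) for j in range(X.shape[1])])
```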

Keywords: data envelopment analysis, efficiency, sensitivity analysis, input, output

Procedia PDF Downloads 434
24599 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The proposed model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance data quality by addressing missing data, noisy data, and null values. The pre-processed data are class imbalanced in nature and are therefore handled with the K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-Improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The detection stage uses deep learning models: Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the extracted optimal features, and their outputs are fed as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the Self-Improved Arithmetic Optimization Algorithm (SI-AOA).
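
A minimal sketch of two of the named steps: K-means clustering-based SMOTE for the class imbalance, and the listed statistical and higher-order features. It assumes the imbalanced-learn package and a synthetic transaction dataset; the threshold value is illustrative and may need tuning, and the PCA features, SI-AOA selection, and deep-learning ensemble are not shown.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.datasets import make_classification
from imblearn.over_sampling import KMeansSMOTE

# hypothetical imbalanced transaction features (about 1% "fraud")
X, y = make_classification(n_samples=5000, n_features=12, weights=[0.99, 0.01], random_state=0)

# class-imbalance handling with K-means clustering-based SMOTE
X_bal, y_bal = KMeansSMOTE(random_state=0, cluster_balance_threshold=0.01).fit_resample(X, y)

# per-record statistical and higher-order statistical features, as listed in the abstract
stats = np.column_stack([
    X_bal.mean(axis=1), np.median(X_bal, axis=1), X_bal.std(axis=1),
    skew(X_bal, axis=1), kurtosis(X_bal, axis=1),
])
print(np.bincount(y_bal), stats.shape)
```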

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 115
24598 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in the Application Layer at Run-Time

Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl

Abstract:

In recent years, SQL injection attacks have been identified as prevalent against web applications. They affect network security and user data, which leads to a considerable loss of money and data every year. This paper presents the use of machine learning classification algorithms to classify login input data as "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of results when deciding whether an operation is an attack or a valid operation. A method, WebAppShield (a web-app auto-generated twin data structure replication that shields against SQLi attacks), has been developed; it verifies all users, allows only operations that the machine learning module predicts as "Non-SQLi" to reach the database, and thereby prevents attackers (SQLi attacks) from entering or accessing it. A special login form has been developed with a dedicated instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated, and up to 99% of SQLi attacks were prevented.
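
A minimal sketch of the "SQLi" / "Non-SQLi" classification idea on login inputs. The abstract does not name a specific classifier or feature set, so the character n-gram TF-IDF features, logistic regression model, and the tiny labelled sample below are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical labelled login inputs
inputs = ["alice", "bob42", "' OR '1'='1", "admin'--", "carol_smith", "1; DROP TABLE users"]
labels = ["Non-SQLi", "Non-SQLi", "SQLi", "SQLi", "Non-SQLi", "SQLi"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),  # character n-grams pick up quotes/comments
    LogisticRegression(max_iter=1000),
)
clf.fit(inputs, labels)
print(clf.predict(["dave", "x' UNION SELECT password FROM users--"]))
```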

Keywords: SQL injection, attacks, web application, accuracy, database

Procedia PDF Downloads 131
24597 Regression for Doubly Inflated Multivariate Poisson Distributions

Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta

Abstract:

Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, when the number of observations in some cells is much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher compared to the other cells, and we develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for the doubly inflated multivariate count data. To illustrate the proposed methodologies, we present a real dataset containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation to estimate the unknown parameters of the models.
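
A minimal sketch of the "doubly inflated" idea: a mixture that puts extra probability mass on two cells on top of a baseline bivariate count model. For brevity the baseline here uses independent Poisson margins rather than the Gaussian-copula construction developed in the paper, and all parameter values are illustrative.

```python
from scipy.stats import poisson

def doubly_inflated_pmf(y, p0, pk, k, lam1, lam2):
    """P(Y = y) for a doubly inflated bivariate count model.
    Inflation at (0, 0) with probability p0 and at cell k with probability pk;
    otherwise a baseline bivariate Poisson with independent margins (simplification)."""
    y1, y2 = y
    base = poisson.pmf(y1, lam1) * poisson.pmf(y2, lam2)
    prob = (1.0 - p0 - pk) * base
    if (y1, y2) == (0, 0):
        prob += p0
    if (y1, y2) == tuple(k):
        prob += pk
    return prob

print(doubly_inflated_pmf((0, 0), p0=0.15, pk=0.10, k=(3, 2), lam1=1.4, lam2=0.9))
print(doubly_inflated_pmf((3, 2), p0=0.15, pk=0.10, k=(3, 2), lam1=1.4, lam2=0.9))
```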

Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions

Procedia PDF Downloads 146
24596 An Exploratory Research of Human Character Analysis Based on Smart Watch Data: Distinguish the Drinking State from Normal State

Authors: Lu Zhao, Yanrong Kang, Lili Guo, Yuan Long, Guidong Xing

Abstract:

Smart watches, as handy devices with rich functionality, have become some of the most popular wearable devices all over the world. Among their various functions, the most basic is health monitoring. The monitoring data can serve as effective evidence or a clue for the detection of crime cases. For instance, step-counting data can help to determine whether the watch wearer was still or moving during a given time period. There is, however, still little research on the analysis of human character based on these data. The purpose of this research is to analyze health monitoring data to distinguish the drinking state from the normal state. The results may play a role in cases involving drinking, such as drunk driving. The experiment mainly focused on finding which smart watch health monitoring measures change with drinking and on quantifying the extent of the change. The chosen subjects were mostly in their 20s, each of whom wore the same smart watch for a week. Each subject drank several times during the week and noted down the start and end times of each drinking episode. The researchers then extracted and analyzed the health monitoring data from the watch. According to the descriptive statistical analysis, heart rate changes when drinking: the average heart rate is about 10% higher than normal, and the coefficient of variation is less than about 30% of that in the normal state. Though more research needs to be carried out, this experiment and analysis suggest a possible application of data from smart watches.
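
A minimal sketch of the descriptive statistics mentioned above: group the heart-rate records by state and compare the mean and coefficient of variation. The values below are made up for illustration; they are not the study's data.

```python
import pandas as pd

# hypothetical per-minute heart-rate records labelled by state
df = pd.DataFrame({
    "state": ["normal"] * 5 + ["drinking"] * 5,
    "heart_rate": [68, 72, 70, 74, 66, 78, 81, 76, 80, 79],
})

summary = df.groupby("state")["heart_rate"].agg(["mean", "std"])
summary["cv"] = summary["std"] / summary["mean"]                       # coefficient of variation
summary["mean_vs_normal_%"] = 100 * (summary["mean"] / summary.loc["normal", "mean"] - 1)
print(summary)
```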

Keywords: character analysis, descriptive statistics analysis, drink state, heart rate, smart watch

Procedia PDF Downloads 155
24595 Desired Flow of Radioactive Materials from Logistics Service Quality Perspective

Authors: Tuğçe Yavaş Akış

Abstract:

In recent years, due to the increased use of radioactive materials, radioactive sources are constantly being transported by air, road, and sea for medical, industrial, research, and other purposes throughout the world. The quantity of radioactive materials transported around the world varies from negligible quantities in shipments of consumer products to very large quantities in shipments of irradiated nuclear fuel. Radioactive materials have attracted little attention from social science researchers in the literature. This study aims to describe the desired flow of radioactive materials from a logistics service quality (LSQ) perspective. In doing so, a case study approach is employed using secondary data collected from the customer care system reports of one of the world's leading transportation companies. The movement of radioactive cargoes containing Ir-192 and the logistics process are analyzed with the help of logistics service quality dimensions. Based on the case study, the interaction between dimensions, the importance of each dimension in the desired flow, and their relevance to the desired flow of radioactive materials are explained. This study describes the desired flow of radioactive materials transportation and can serve as a guide for other companies, employees, and researchers.

Keywords: logistics service quality, LSQ dimensions, radioactive material, transportation

Procedia PDF Downloads 219
24594 An Approach to Practical Determination of Fair Premium Rates in Crop Hail Insurance Using Short-Term Insurance Data

Authors: Necati Içer

Abstract:

Crop-hail insurance plays a vital role in managing risks and reducing the financial consequences of hail damage on crop production. Predicting insurance premium rates with short-term data is a major difficulty in numerous nations because of the unique characteristics of hailstorms. This study aims to suggest a feasible approach for establishing equitable premium rates in crop-hail insurance for nations with short-term insurance data. The primary goal of the rate-making process is to determine premium rates for villages with high and with zero loss costs and to enhance their credibility. To do this, a technique was created using the author's practical knowledge of crop-hail insurance. With this approach, the rate-making method was developed using a range of temporal and spatial factor combinations with both hypothetical and real data, including extreme cases. This article aims to show how to incorporate temporal and spatial elements into determining fair premium rates using short-term insurance data. The article ends with a suggestion on the ultimate premium rates for insurance contracts.
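
For orientation only, a generic credibility-weighting sketch of how a village's own short-record loss cost can be blended with a broader regional rate, which is the kind of problem the abstract describes for villages with very high or zero observed losses. This is not the author's method; the Buhlmann-style weight, the constant k, and the numbers are all assumptions.

```python
def credibility_weighted_rate(village_losses, village_liability, regional_rate, k=2000.0):
    """Blend the village's own loss cost with the regional rate.
    Z = n / (n + k) is a simple Buhlmann-style credibility factor; k is an assumed constant."""
    n = village_liability                                    # exposure, e.g. total insured value
    village_loss_cost = village_losses / n if n > 0 else 0.0
    Z = n / (n + k)
    return Z * village_loss_cost + (1 - Z) * regional_rate

# hypothetical villages: one with heavy losses, one with zero losses over a short record
print(credibility_weighted_rate(village_losses=900.0, village_liability=10000.0, regional_rate=0.03))
print(credibility_weighted_rate(village_losses=0.0,   village_liability=500.0,   regional_rate=0.03))
```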

Keywords: crop-hail insurance, premium rate, short-term insurance data, spatial and temporal parameters

Procedia PDF Downloads 32
24593 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa

Authors: Samy A. Khalil, U. Ali Rahoma

Abstract:

Satellite data have been routinely utilized, alongside ground measurements of solar radiation, to estimate solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa during the ten-year period from 2011 to 2020. To investigate seasonal changes in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: monthly mean values indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation using the ERA-5 solar radiation data are discussed. Over the study period, the average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, and the correlation coefficient (R²) varies from 0.93 to 0.99. The objective of this research is to provide a reliable representation of the world's solar radiation to aid in the use of solar energy in all sectors.
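
A minimal sketch of the verification statistics named above (MBE, RMSE, MAE, R²) computed between surface measurements and reanalysis estimates; the daily GSR values are made up for illustration.

```python
import numpy as np

def verification_stats(ground, reanalysis):
    """MBE, RMSE, MAE and R^2 between surface measurements and reanalysis estimates."""
    ground, reanalysis = np.asarray(ground, float), np.asarray(reanalysis, float)
    diff = reanalysis - ground
    mbe = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    mae = np.abs(diff).mean()
    r2 = np.corrcoef(ground, reanalysis)[0, 1] ** 2
    return mbe, rmse, mae, r2

# hypothetical daily GSR values (kWh/m^2) at one station
ground = [5.1, 6.3, 7.0, 6.8, 5.9, 4.7, 6.1]
era5 =   [5.3, 6.1, 7.2, 6.6, 6.2, 4.9, 6.0]
print(verification_stats(ground, era5))
```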

Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa

Procedia PDF Downloads 86
24592 The Role of Named Entity Recognition for Information Extraction

Authors: Girma Yohannis Bade, Olga Kolesnikova, Grigori Sidorov

Abstract:

Named entity recognition (NER) is a building block of information extraction. Although the information extraction process has been automated using a variety of techniques to find and extract relevant pieces of information from unstructured documents, the discovery of targeted knowledge still poses a number of research difficulties because of the variability and lack of structure in Web data. NER, a subtask of information extraction (IE), emerged to smooth over such difficulties. It deals with finding proper names (named entities), such as the names of persons, countries, locations, organizations, dates, and events in a document, and categorizing them with predetermined labels, which is an initial step in IE tasks. This survey paper presents the role and importance of NER in IE from the perspective of different algorithms and application domains. It summarizes how researchers have implemented NER in particular application areas such as finance, medicine, defense, business, food science, and archeology. It also outlines the three types of sequence labeling algorithms for NER: feature-based, neural network-based, and rule-based. Finally, the state of the art and the evaluation metrics of NER are presented.
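
A minimal sketch of the NER step itself, finding and labeling named entities in a sentence. The abstract does not name a toolkit, so spaCy and its small English model are assumptions chosen only to make the idea concrete.

```python
import spacy

# requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Marie Curie received the Nobel Prize in Stockholm on 10 December 1903.")

for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. PERSON, GPE, DATE labels feeding later IE steps
```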

Keywords: the role of NER, named entity recognition, information extraction, sequence labeling algorithms, named entity application area

Procedia PDF Downloads 65
24591 Algorithm Optimization to Sort in Parallel by Decreasing the Number of the Processors in SIMD (Single Instruction Multiple Data) Systems

Authors: Ali Hosseini

Abstract:

Parallelization is a mechanism to decrease the time necessary to execute programs. Sorting is one of the important operations used in different systems, in that the proper functioning of many algorithms and operations depends on sorted data. The CRCW_SORT algorithm sorts 'N' elements in O(1) time on SIMD (Single Instruction Multiple Data) computers with n^2/2 - n/2 processors. In this article, a mechanism is presented that divides the input string by a hinge (pivot) element into two shorter strings; with it, the number of processors needed to sort 'N' elements in O(1) time is reduced to n^2/8 - n/4 in the best case. The best case occurs when the hinge element is the median, and the worst case when it is the minimum. The findings from assessing the proposed algorithm against other methods, over the data collections and processor counts considered, indicate that the proposed algorithm uses fewer processors to sort during execution than the other methods.
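
A short worked comparison of the two processor-count formulas quoted above, evaluated for a few input sizes (best case only, as stated in the abstract).

```python
def crcw_processors(n):
    """Processor count of the baseline CRCW sort for n elements: n^2/2 - n/2."""
    return n * n // 2 - n // 2

def proposed_best_case(n):
    """Best-case processor count after splitting around a median hinge element: n^2/8 - n/4."""
    return n * n // 8 - n // 4

for n in (8, 16, 64, 256):
    print(n, crcw_processors(n), proposed_best_case(n))
```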

Keywords: CRCW, SIMD (Single Instruction Multiple Data) computers, parallel computers, number of processors

Procedia PDF Downloads 295
24590 Increasing the System Availability of Data Centers by Using Virtualization Technologies

Authors: Chris Ewe, Naoum Jamous, Holger Schrödl

Abstract:

Like most entrepreneurs, data center operators pursue goals such as profit maximization, improvement of the company's reputation, or simply survival in the market. Part of these aims is to guarantee a given quality of service. Quality characteristics are specified in a contract called the service level agreement. A central part of this agreement is the non-functional properties of an IT service. System availability is one of the most important of these properties, as will be shown in this paper. To comply with availability requirements, data center operators can use virtualization technologies. A clear model to assess the effect of virtualization functions on the parts of a data center in relation to system availability is still missing. This paper aims to introduce a basic model that shows these connections and to consider whether the identified effects are positive or negative. Thus, this work also points out possible disadvantages of the technology. In consequence, the paper shows opportunities as well as risks of data center virtualization in relation to system availability.
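
A minimal sketch of the standard availability arithmetic such a model builds on: steady-state availability from MTBF/MTTR and its composition for series (all parts needed) and redundant (parallel) configurations. The MTBF/MTTR figures are illustrative, not from the paper.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*parts):
    """All parts needed (e.g. host + hypervisor): availabilities multiply."""
    a = 1.0
    for p in parts:
        a *= p
    return a

def parallel(*parts):
    """Redundant paths (e.g. a VM that can fail over to a second host)."""
    u = 1.0
    for p in parts:
        u *= (1.0 - p)
    return 1.0 - u

host = availability(2000, 8)
hypervisor = availability(5000, 1)
single_stack = series(host, hypervisor)
with_failover = parallel(single_stack, single_stack)   # live migration / failover as a second path
print(round(single_stack, 5), round(with_failover, 7))
```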

Keywords: availability, cloud computing IT service, quality of service, service level agreement, virtualization

Procedia PDF Downloads 522
24589 Review of Different Machine Learning Algorithms

Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui

Abstract:

Classification is a data mining technique based on Machine Learning (ML) algorithms. It is used to classify individual items in a collection of information into a set of predefined classes or groups. Web mining is also part of this family of data mining methods. The main purpose of this paper is to analyze and compare the performance of the Naïve Bayes algorithm, Decision Tree, K-Nearest Neighbor (KNN), Artificial Neural Network (ANN), and Support Vector Machine (SVM). The paper reviews these ML algorithms with their advantages and disadvantages and also defines open research issues.
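
A minimal sketch of such a comparison using cross-validation over the five classifiers named above. The paper does not specify a dataset or implementation, so scikit-learn and the breast-cancer benchmark are stand-ins chosen only to illustrate the procedure.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset
models = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "ANN": MLPClassifier(max_iter=2000, random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)   # scaling helps KNN, ANN and SVM
    print(name, cross_val_score(pipe, X, y, cv=5).mean().round(3))
```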

Keywords: data mining, web mining, classification, ML algorithms

Procedia PDF Downloads 278
24588 Using Genetic Algorithms and Rough Set Based Fuzzy K-Modes to Improve Centroid Model Clustering Performance on Categorical Data

Authors: Rishabh Srivastav, Divyam Sharma

Abstract:

We propose an algorithm to cluster categorical data named 'Genetic algorithm initialized rough set based fuzzy K-Modes for categorical data'. We propose an amalgamation of the simple K-modes algorithm, the rough and fuzzy set based K-modes, and the genetic algorithm to form a new algorithm, which we hypothesise will provide better centroid-model clustering results than existing standard algorithms. In the proposed algorithm, the initialization and updating of modes are done using genetic algorithms, while the membership values are calculated using rough sets and fuzzy logic.
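
For reference, a minimal sketch of plain K-modes clustering on categorical records, the base algorithm the proposal extends. It assumes the third-party kmodes package; the genetic-algorithm initialization and the rough/fuzzy membership calculation described in the abstract are not shown.

```python
import numpy as np
from kmodes.kmodes import KModes   # assumed package: pip install kmodes

# hypothetical categorical records (e.g. survey answers)
X = np.array([
    ["red",  "small",  "yes"],
    ["red",  "small",  "no"],
    ["blue", "large",  "yes"],
    ["blue", "large",  "no"],
    ["blue", "medium", "yes"],
])

km = KModes(n_clusters=2, init="Huang", n_init=5, random_state=0)
labels = km.fit_predict(X)
print(labels, km.cluster_centroids_)
```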

Keywords: categorical data, fuzzy logic, genetic algorithm, K modes clustering, rough sets

Procedia PDF Downloads 232
24587 Forecasting Amman Stock Market Data Using a Hybrid Method

Authors: Ahmad Awajan, Sadam Al Wadi

Abstract:

In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the forecasts are aggregated to obtain the forecast of the stock market data. Empirical results showed that EMD-HW outperforms the individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need to use any transformation method. Moreover, EMD-HW has relatively high accuracy compared with eight existing forecasting methods based on five forecast error measures.
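
A minimal sketch of the decompose-forecast-aggregate workflow described above. It assumes the PyEMD package (EMD-signal) and statsmodels, uses a synthetic series instead of Amman stock data, and applies additive-trend Holt-Winters without a seasonal term for brevity.

```python
import numpy as np
from PyEMD import EMD                                   # assumed package: pip install EMD-signal
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
t = np.arange(300)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.3, size=t.size)  # stand-in data

emd = EMD()
emd.emd(series)
imfs, residue = emd.get_imfs_and_residue()              # IMFs plus the residual trend
components = list(imfs) + [residue]

horizon = 12
forecast = np.zeros(horizon)
for component in components:
    fit = ExponentialSmoothing(component, trend="add").fit()   # Holt-Winters style smoothing per component
    forecast += fit.forecast(horizon)                          # aggregate the component forecasts

print(forecast)
```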

Keywords: Holt-Winter method, empirical mode decomposition, forecasting, time series

Procedia PDF Downloads 111
24586 Building Information Modeling-Based Information Exchange to Support Facilities Management Systems

Authors: Sandra T. Matarneh, Mark Danso-Amoako, Salam Al-Bizri, Mark Gaterell

Abstract:

Today’s facilities are ever more sophisticated, and the need for available and reliable information for operation and maintenance activities is vital. The key challenge for facilities managers is to have real-time, accurate, and complete information to perform their day-to-day activities and to provide their senior management with accurate information for the decision-making process. Currently, various technology platforms, data repositories, or database systems such as Computer-Aided Facility Management (CAFM) are used for these purposes in different facilities. In most current practices, the data are extracted from paper construction documents and re-entered manually into one of these computerized information systems. Construction Operations Building information exchange (COBie) is a non-proprietary data format that contains the non-geometric asset data captured and collected during the design and construction phases for owners and facility managers to use. Recently, software vendors have developed add-in applications to generate the COBie spreadsheet automatically. However, most of these add-in applications are capable of generating only a limited amount of COBie data, so considerable time is still required to enter the remaining data manually to complete the COBie spreadsheet. Some of the data that cannot be generated by these COBie add-ins are essential for facilities managers' day-to-day activities, such as the job sheet, which includes preventive maintenance schedules. To facilitate a seamless data transfer between BIM models and facilities management systems, we developed a framework that enables automated data generation using the data extracted directly from BIM models to an external web database, and then enables different stakeholders to access the external web database and enter the required asset data directly, generating a rich COBie spreadsheet that contains most of the asset data required for efficient facilities management operations. The proposed framework is part of ongoing research and will be demonstrated and validated on a typical university building. Moreover, the proposed framework supplements the existing body of knowledge in the facilities management domain by providing a novel framework that facilitates seamless data transfer between BIM models and facilities management systems.
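
A minimal sketch of the data-merging step the framework implies: fields extracted from a BIM model are combined with fields entered later by stakeholders and written out as a COBie-style row. The field names and values are illustrative only and do not follow the official COBie schema or the authors' implementation.

```python
import csv

# hypothetical asset record extracted from a BIM model
bim_extracted = {"Name": "AHU-01", "Category": "Air Handling Unit", "Space": "Plant Room 2"}

# hypothetical fields entered later by stakeholders via the external web database
stakeholder_entered = {"SerialNumber": "SN-83721", "InstallationDate": "2021-06-14",
                       "WarrantyDurationPart": "24", "JobSheet": "PM every 3 months"}

cobie_component = {**bim_extracted, **stakeholder_entered}   # merged COBie-style component row

with open("cobie_component.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=cobie_component.keys())
    writer.writeheader()
    writer.writerow(cobie_component)
```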

Keywords: building information modeling, BIM, facilities management systems, interoperability, information management

Procedia PDF Downloads 100
24585 An International Curriculum Development for Languages and Technology

Authors: Miguel Nino

Abstract:

When considering the challenges of a changing and demanding globalizing world, it is important to reflect on how university students will be prepared for the realities of internationalization, marketization and intercultural conversation. The present study is an interdisciplinary program designed to respond to the needs of the global community. The proposal bridges the humanities and science through three different fields: languages, graphic design and computer science, specifically fundamentals of programming such as Python, JavaScript and software animation. Therefore, the goal of the four-year program is twofold. First, enable students for intercultural communication between English and other languages such as Spanish, Mandarin, French or German. Second, students will acquire knowledge of practical software and relevant employable skills to collaborate on computer-assisted projects that most probably will require an essential programming background in interpreted or compiled languages. In order to become inclusive and constructivist, the cognitive linguistics approach is suggested for the three different fields, particularly for languages that rely on the traditional method of repetition. This methodology will help students develop their creativity and encourage them to become independent problem-solving individuals, as languages enhance their common ground of interaction for culture and technology. Participants in this course of study will be evaluated in their second language acquisition at the Intermediate-High level. For graphic design and computer science, students will apply their creative digital skills, as well as the critical thinking skills learned from the cognitive linguistics approach, to collaborate on a group project designed to find solutions for media web design problems or marketing experimentation for a company or the community. It is understood that it will be necessary to apply programming knowledge and skills to deliver the final product. In conclusion, the program equips students with linguistic knowledge and skills to be competent in intercultural communication, where English, the lingua franca, remains the medium for marketing and product delivery. In addition to their employability, students can expand their knowledge and skills in digital humanities, computational linguistics, or increase their portfolio in advertising and marketing. These students will be the global human capital for the competitive globalizing community.

Keywords: curriculum, international, languages, technology

Procedia PDF Downloads 432
24584 Students’ Perception of Effort and Emotional Costs in Chemistry Courses

Authors: Guizella Rocabado, Cassidy Wilkes

Abstract:

It is well known that chemistry is one of the most feared courses in college. Although many students enjoy learning about science, most of them perceive that chemistry is "too difficult". These perceptions of chemistry result in many students not considering Science, Technology, Engineering, and Mathematics (STEM) majors because they require chemistry courses. Ultimately, these perceptions are also thought to be related to the high attrition rates of students who begin STEM majors but do not persist. Students' perceived costs of a chemistry class can be many, such as task effort, loss of valued alternatives, emotional cost, and others. These costs might be overcome by students' interests and goals, yet the level of perceived cost might have a lasting impact on students' overall perception of chemistry and their desire to pursue chemistry and other STEM careers in the future. In this mixed-methods study, we investigated task effort and emotional cost, as well as mastery and performance goal orientations, and the impact these constructs may have on achievement in general chemistry classrooms. Utilizing cluster analysis as well as student interviews, we investigated students' profiles of perceived cost and goal orientation as they relate to their final grades. Our results show that students who are well prepared for general chemistry, such as those who have taken chemistry in high school, display less negative perceived cost and thus believe they can master the material more fully. Other interesting results have also emerged from this research, which has the potential to impact the future instruction of these courses.

Keywords: chemistry education, motivation, affect, perceived costs, goal orientations

Procedia PDF Downloads 73
24583 Determinants of Integrated Reporting in Nigeria

Authors: Uwalomwa Uwuigbe, Olubukola Ranti Uwuigbe, Jinadu Olugbenga, Otekunrin Adegbola

Abstract:

Corporate reporting has evolved over the years as a result of criticism of earlier reporting practices by shareholders, stakeholders, and other relevant financial institutions. Integrated reporting has become a globalized corporate reporting style, with its adoption around the world occurring rapidly to bring about an improvement in the quality of corporate reporting. While some countries have swiftly embraced reporting in an integrated manner, others have not. In addition, there is ample research on the benefits of adopting integrated reporting; however, the same is not true for developing economies like Nigeria. Hence, this study examined the factors determining the adoption of integrated reporting in Nigeria. One hundred (100) copies of a questionnaire were administered to the financial managers of 20 selected companies listed on the Nigerian Stock Exchange. The data obtained were analysed using the Spearman rank-order correlation via the Statistical Package for the Social Sciences. This study observed that there is a significant relationship between the social pressures of isomorphic change and integrated reporting adoption in Nigeria. The study recommends that an enforcement mechanism be put in place when considering the adoption of integrated reporting in Nigeria; such enforcement mechanisms should take into consideration investor demand, the level of economic development, and the degree of corporate social responsibility.
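
A minimal sketch of the statistical test named above, the Spearman rank-order correlation, applied to made-up questionnaire scores (perceived isomorphic pressure versus readiness to adopt integrated reporting); the numbers are not the study's data.

```python
from scipy.stats import spearmanr

# hypothetical questionnaire scores from financial managers (Likert-style, 1-5)
pressure = [4, 5, 3, 2, 5, 4, 1, 3, 4, 5]
adoption = [3, 5, 3, 2, 4, 4, 2, 2, 5, 5]

rho, p_value = spearmanr(pressure, adoption)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```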

Keywords: corporate social responsibility, isomorphic, integrated reporting, Nigeria, sustainability

Procedia PDF Downloads 376
24582 Urinalysis by Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles for Different Diseases

Authors: Leonardo C. Pacheco-Londoño, Nataly J. Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta, Elkin Navarro, Gustavo Aroca-Martínez, Karin Rondón-Payares, Samuel P. Hernández-Rivera

Abstract:

In our Life Science Research Center of the University Simon Bolivar (LSRC), one of the focuses is the diagnosis and prognosis of different diseases, and we have been implementing the use of gold nanoparticles (Au-NPs) for various biomedical applications. In this case, Au-NPs were used for Surface-Enhanced Raman Spectroscopy (SERS) in the diagnosis of different diseases, such as lupus nephritis (LN), hypertension (H), preeclampsia (PC), and others. This methodology is proposed for the diagnosis of each disease. First, good SERS signals of the different metabolites were obtained from mixtures of urine samples and Au-NPs. Second, PLS-DA models based on the SERS spectra were able to discriminate each disease, differentiating between sick and healthy patients. Finally, the sensitivity and specificity of the different models were determined to be on the order of 0.9. In addition, a second methodology was developed using machine learning models on the combined data from the different diseases, and, as a result, a discriminant spectral map of the diseases was generated. These studies were possible thanks to joint research between two university research centers and two health sector entities, and the patient samples were handled with ethical rigor and with patient consent.
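
A minimal sketch of the PLS-DA step: PLS regression on one-hot class labels, with the predicted class taken as the highest-scoring column. The spectra, class labels, and number of components below are synthetic assumptions, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer

rng = np.random.default_rng(1)
spectra = rng.normal(size=(60, 500))             # hypothetical SERS spectra (60 samples x 500 wavenumbers)
labels = np.array(["healthy", "LN", "PC"] * 20)  # hypothetical classes

Y = LabelBinarizer().fit_transform(labels)       # one-hot targets turn PLS regression into PLS-DA
pls = PLSRegression(n_components=5).fit(spectra, Y)

pred = pls.predict(spectra).argmax(axis=1)       # assign each sample to the highest-scoring class
classes = np.unique(labels)                      # same alphabetical order as the binarizer's columns
print((classes[pred] == labels).mean())          # training accuracy of the sketch
```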

Keywords: SERS, Raman, PLS-DA, diseases

Procedia PDF Downloads 123
24581 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review

Authors: R. Christopher, C. Brandt, N. Damons

Abstract:

Background: The sequence of prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that the collection and recording of data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the available data regarding their applicability, validity, and reliability are scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English-language studies published since the year 2000 were included. Two blinded independent reviewers selected and appraised articles on a 9-point scale for inclusion as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included in the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively. Seven studies were graded as low and three studies as high risk of bias. Only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The accuracy of the screening tools showed high reliability, sensitivity, and specificity (calculated as ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, the qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implications for evidence-based practice.
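
A minimal sketch of the pooling arithmetic behind figures like those above: inverse-variance fixed-effect pooling of per-study estimates with Cochran's Q and the I² heterogeneity statistic. The per-study sensitivities and variances are made up for illustration and do not reproduce the review's calculation.

```python
import numpy as np

def pooled_fixed_effect(estimates, variances):
    """Inverse-variance fixed-effect pooling with Cochran's Q and the I^2 statistic."""
    estimates, variances = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1.0 / variances
    pooled = np.sum(w * estimates) / np.sum(w)
    q = np.sum(w * (estimates - pooled) ** 2)
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, i2

# hypothetical per-study sensitivities and their variances
sens = [0.62, 0.66, 0.61]
var = [0.0009, 0.0012, 0.0008]
print(pooled_fixed_effect(sens, var))
```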

Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity

Procedia PDF Downloads 162
24580 Data Security and Privacy Challenges in Cloud Computing

Authors: Amir Rashid

Abstract:

Cloud computing frameworks empower organizations to cut expenses by outsourcing computation resources on demand. At present, customers of cloud service providers have no means of verifying the privacy and ownership of their information and data. To address this issue, we propose a trusted cloud computing platform (TCCP). TCCP enables Infrastructure as a Service (IaaS) providers, for example Amazon EC2, to offer a closed-box execution environment that ensures confidential execution of guest virtual machines. It also allows customers to attest to the IaaS provider and determine whether the service is secure before they launch their virtual machines. This paper proposes a Trusted Cloud Computing Platform (TCCP) for guaranteeing the privacy and integrity of computed data that are outsourced to IaaS service providers. The TCCP provides the abstraction of a closed-box execution environment for a customer's VM, ensuring that no privileged administrator of the cloud provider can inspect or tamper with its data. Furthermore, before launching the VM, the TCCP permits a customer to reliably and remotely verify that the provider's backend is running a trusted TCCP. This capability extends attestation to the whole service and hence permits a customer to verify that its data are operated on in a secure mode.

Keywords: cloud security, IaaS, cloud data privacy and integrity, hybrid cloud

Procedia PDF Downloads 285
24579 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using graph-based open-source EHR data, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged due to its close representation of real-world data and its volume. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually performing the node classification using those embeddings. The model predicts patient conditions ranging from common to rare situations. The outcome is expected to open up opportunities for data querying toward better predictions and accuracy.
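
A minimal sketch of node classification with PyTorch Geometric, the final step described above. For brevity it trains a small supervised GCN on a synthetic toy graph rather than the Synthea/TigerGraph data and the autoencoder-based self-supervised embeddings used in the paper.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# hypothetical tiny patient graph: 6 nodes, a few undirected edges, 8 features, 3 condition classes
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4, 4, 5],
                           [1, 0, 2, 1, 4, 3, 5, 4]], dtype=torch.long)
y = torch.tensor([0, 0, 1, 1, 2, 2])
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(8, 16)
        self.conv2 = GCNConv(16, 3)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(data), data.y)   # node-level condition labels
    loss.backward()
    optimizer.step()
print(model(data).argmax(dim=1))                  # predicted condition class per patient node
```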

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 76
24578 Metal Ship and Robotic Car: A Hands-On Activity to Develop Scientific and Engineering Skills for High School Students

Authors: Jutharat Sunprasert, Ekapong Hirunsirisawat, Narongrit Waraporn, Somporn Peansukmanee

Abstract:

Metal Ship and Robotic Car is one of the hands-on activities in the course the Fundamentals of Engineering, and it can be divided into three parts. In the first part, the metal ships were made using engineering drawing, physics, and mathematics knowledge. In the second part, the students learned how to construct a robotic car and control it using computer programming. In the last part, the students had to combine the workings of these two objects in the final testing. The aim of this study was to investigate the effectiveness of a hands-on activity integrating Science, Technology, Engineering and Mathematics (STEM) concepts to develop scientific and engineering skills. The results showed that the majority of students felt this hands-on activity led to an increased confidence level in the integration of STEM. Moreover, 48% of all students engaged well with the STEM concepts. Students could obtain knowledge of STEM through hands-on activities on the topics of science and mathematics, engineering drawing, engineering workshop, and computer programming; most students agreed or strongly agreed with this learning process. This indicates that the hands-on activity "Metal Ship and Robotic Car" is a useful tool to integrate each aspect of STEM. Furthermore, hands-on activities positively influence students' interest, which leads to increased learning achievement and to the development of scientific and engineering skills.

Keywords: hands-on activity, STEM education, computer programming, metal work

Procedia PDF Downloads 449
24577 A Proposal to Tackle Security Challenges of Distributed Systems in the Healthcare Sector

Authors: Ang Chia Hong, Julian Khoo Xubin, Burra Venkata Durga Kumar

Abstract:

Distributed systems offer many benefits to the healthcare industry. From big data analysis to business intelligence, the increased computational power and efficiency of distributed systems serve as an invaluable resource for the healthcare sector to utilize. However, as the usage of these distributed systems increases, many issues arise. The main focus of this paper is on security issues. Many security issues stem from distributed systems in the healthcare industry, particularly information security. People's data are especially sensitive in the healthcare industry. If important information gets leaked (e.g., identity card number, credit card number, address), a person's identity, financial status, and safety might be compromised. This results in the responsible organization losing a lot of money in compensating these people, and even more resources are expended trying to fix the fault. Therefore, a framework for a blockchain-based healthcare data management system is proposed. In this framework, a blockchain network is used to store the encryption key of the patient's data. The actual data are encrypted, and the encrypted data, called ciphertext, are stored on a cloud storage platform. Furthermore, some issues have to be emphasized and tackled in future improvements, such as proposing a multi-user scheme, tackling authentication issues, or migrating the backend processes into the blockchain network. Due to the nature of blockchain technology, the data will be tamper-proof, and their read-only function can only be accessed by authorized users such as doctors and nurses. This guarantees the confidentiality and immutability of the patient's data.
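
A minimal sketch of the split described above: the patient record is encrypted, the ciphertext goes to cloud storage, and the key (with an integrity hash) is what would be recorded on the blockchain. It assumes the Python cryptography package and uses a plain dictionary as a stand-in for the on-chain entry; it is not the authors' implementation.

```python
import hashlib
from cryptography.fernet import Fernet   # assumed package: pip install cryptography

# encrypt the patient record; only the ciphertext would go to cloud storage
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b'{"patient": "P-0001", "diagnosis": "hypertension"}')

# stand-in for what this sketch would record on-chain: the key plus an integrity hash
block_entry = {"key": key.decode(), "sha256": hashlib.sha256(ciphertext).hexdigest()}

# an authorized reader fetches the ciphertext from the cloud and the key from the chain
assert hashlib.sha256(ciphertext).hexdigest() == block_entry["sha256"]   # tamper check
print(Fernet(block_entry["key"].encode()).decrypt(ciphertext))
```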

Keywords: distributed, healthcare, efficiency, security, blockchain, confidentiality and immutability

Procedia PDF Downloads 166
24576 Design and Implementation of a Geodatabase and WebGIS

Authors: Sajid Ali, Dietrich Schröder

Abstract:

The merging of the Internet and the Web has created many disciplines, and Web GIS is one of these disciplines, dealing with geospatial data in a proficient way. Web GIS technologies have enabled easy access to and sharing of geospatial data over the Internet. However, the European Caribbean Association (Europaische Karibische Gesselschaft - EKG) lacks a single platform for easy, multi-user access to the data to assist its members and the wider research community. The technique presented in this paper deals with the design of a geodatabase using PostgreSQL/PostGIS as an object-relational database management system (ORDBMS) for competent dissemination and management of spatial data, and with a Web GIS using OpenGeo Suite for fast sharing and distribution of the data over the Internet. The characteristics of the required geodatabase design have been studied, and a specific methodology is given for designing the Web GIS. Finally, this Web-based geodatabase was validated with two desktop GIS software packages and a web map application, and it is discussed that the contribution has all the desired modules to expedite further research in the area as per the requirements.
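
A minimal sketch of the geodatabase side: enabling PostGIS on a PostgreSQL database and storing a point feature. It assumes the psycopg2 driver; connection parameters, the table, and the coordinates are placeholders, not the EKG schema.

```python
import psycopg2   # assumed driver; connection parameters are placeholders

conn = psycopg2.connect(host="localhost", dbname="ekg_geodb", user="gis", password="secret")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS postgis;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS survey_sites (
        id   serial PRIMARY KEY,
        name text NOT NULL,
        geom geometry(Point, 4326)      -- WGS84 point geometry
    );
""")
cur.execute(
    "INSERT INTO survey_sites (name, geom) VALUES (%s, ST_SetSRID(ST_MakePoint(%s, %s), 4326));",
    ("Sample site", -61.39, 15.30),
)
conn.commit()
cur.close()
conn.close()
```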

Keywords: desktop GIS software, European Caribbean Association, geodatabase, OpenGeo Suite, PostgreSQL/PostGIS, web GIS, web map application

Procedia PDF Downloads 323
24575 Evaluation and Control of Cracking for Bending Reinforced One-Way Concrete Voided Slab with Plastic Hollow Inserts

Authors: Mindaugas Zavalis

Abstract:

Analysis of experimental test data on one-way reinforced concrete slabs in bending from various scientific articles revealed that voided slabs with a grid of hollow plastic inserts inside have smaller mechanical and physical parameters compared to continuous cross-section (solid) slabs. The negative influence on the reinforced concrete slab comes from the hollow plastic inserts, which create a grid of voids in the middle of the cross-sectional area of the slab. The formed grid of voids reduces the slab's stiffness, which influences the slab's serviceability parameters, such as deflection and cracking. A primary investigation of the experimental data shows that cracks occur sooner on the tensile surface of a voided slab under bending compared to a solid slab. This means that the cracking bending moment of the voided slab is smaller than that of the solid slab, and the reduction can vary in the range of 14 to 40%. The reduction in cracking resistance can be controlled by changing many factors: the shape of the hollow plastic insert, the insert height, the spacing between inserts, the use of prestressed reinforcement, the diameter of the reinforcement bars, the slab's effective depth, the bottom concrete cover thickness, the effective cross-sectional concrete area around the reinforcement, and so on. The mentioned parameters are used to evaluate crack width and crack spacing, but existing analytical calculation methods for the cracking evaluation of voided slabs with plastic inserts are not very exact, and the cracking evaluation results in this paper are higher than the results of the analyzed experiments. Therefore, analytical calculations were made against the experimental bending tests of voided reinforced concrete slabs with hollow plastic inserts in order to find and propose corrections for the evaluation of cracking of such slabs.
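
To illustrate why the cracking moment drops, a simplified elastic sketch that compares M_cr = fctm * W for a solid rectangular strip and for the same strip with the voids idealized as a single rectangular hollow band centred at mid-depth. This idealization and all dimensions and material values are assumptions for illustration; they are not the paper's calculation method or corrections.

```python
def cracking_moment(b, h, fctm, void_width=0.0, void_height=0.0):
    """Elastic cracking moment M_cr = fctm * W for a rectangular strip (units: m and kPa give kNm).
    Voids are idealized as one rectangular hollow band centred on the neutral axis."""
    I_solid = b * h ** 3 / 12.0
    I_void = void_width * void_height ** 3 / 12.0     # centred on the neutral axis, no shift term
    W = (I_solid - I_void) / (h / 2.0)                # section modulus of the net section
    return fctm * W

b, h, fctm = 1.0, 0.30, 2600.0          # 1 m wide strip, 300 mm deep, fctm = 2.6 MPa (illustrative)
solid = cracking_moment(b, h, fctm)
voided = cracking_moment(b, h, fctm, void_width=0.6, void_height=0.18)
print(solid, voided, f"reduction = {100 * (1 - voided / solid):.1f}%")
```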

Keywords: voided slab, cracking, hollow plastic insert, bending, one-way reinforced concrete, serviceability

Procedia PDF Downloads 59
24574 'Freud and Jung: Dissenting Friends': An Analysis of the Foundations of the Psychoanalytical Theory in Their Letters

Authors: Laurence Doremus

Abstract:

Freud, as the builder of psychoanalysis as a discipline, created the science together with Carl Gustav Jung (1875-1961), a psychiatrist from Zurich who was very important to Freudian theory. Knowledge about the foundation of psychoanalysis often focuses on the influence of the works of Breuer or Charcot on Freud's praxis, at least at the beginning of his career, and Jung's influence is often underestimated. The paper focuses on the importance of Jung's contributions to Freud's theories at the beginning of the creation of the discipline in the 1910s. In the academic field we often meet Jungian schools on the one hand and Freudian schools on the other, but the Freudian field has to acknowledge the importance of Jungian theories in Freudian science. The dialectical energy that appears in the letters exchanged between the two fathers of psychoanalysis is also important for understanding the foundations of Freud's theory. That is why the paper analyzes their correspondence in detail, in an epistemological and historical approach. The letters were translated and published (in French but also in English and other languages) relatively late and are still not well known to researchers in the psychoanalytical field. We explain how Freud was helped by Jung despite his desire to build the theory himself. We analyze how the second topography, named 'unconscious, preconscious, and conscious', is the result of the first topography that Jung built with Freud. The paper is a contribution to the knowledge we should have about the intense friendship between the two protagonists.

Keywords: Carl Gustav Jung, correspondence, Freud's letters, psychoanalytic theory

Procedia PDF Downloads 137