Search results for: named data networking
24215 Chemical and Biological Studies of Kielmeyera coriacea Mart. (Calophyllaceae) Based on Ethnobotanical Survey of Rural Community from Brazil
Authors: Vanessa G. P. Severino, Eliangela Cristina Candida Costa, Nubia Alves Mariano Teixeira Pires Gomides, Lucilia Kato, Afif Felix Monteiro, Maria Anita Lemos Vasconcelos Ambrosio, Carlos Henrique Gomes Martins
Abstract:
One of the biomes present in Brazil is known as Cerrado, a vast tropical savanna ecoregion spanning, in particular, the states of Goiás, Mato Grosso do Sul, Mato Grosso, Tocantins, and Minas Gerais. Many of its plant species are endemic, and they have therapeutic value for a large part of the population, especially rural communities. The southeastern region of the state of Goiás contains about 21 rural communities whose form of organization is based on the use of the natural resources available. One of these rural communities is named Coqueiros, and its knowledge of medicinal plants was very important to this research. Thus, this study focuses on an ethnobotanical survey of this community on the use of Kielmeyera coriacea to treat diseases. Of the 37 members interviewed, 76% indicated this species for the treatment of intestinal infection, leukemia, anemia, gastritis, gum pain, toothache, cavities, arthritis, arthrosis, healing, vermifuge, rheumatism, antibiotic, skin problems, mycoses, and all kinds of infections. The medicinal properties attributed during the interviews were framed within body systems (disease categories) adapted from ICD-10; thus, 20 indications of use were obtained across five body systems. The root of this species was therefore selected for chemical and biological (antioxidant and antimicrobial) studies. From liquid-liquid extraction of the ethanolic extract of the root (EER), the hexane (FH), ethyl acetate (FAE), and hydroalcoholic (FHA) fractions were obtained. The chemical profile of these fractions was studied by LC-MS, identifying major compounds such as δ-tocotrienol, a prenylated acylphloroglucinol, 2-hydroxy-1-methoxyxanthone, and quercitrin. EER, FH, FAE, and FHA were submitted to biological tests. FHA presented the best antioxidant action (EC50 201.53 μg mL-1).
EER inhibited the bacterial growth of Streptococcus pyogenes and Pseudomonas aeruginosa, microorganisms associated with rheumatism, at a Minimum Inhibitory Concentration (MIC) of 6.25 μg mL-1. In addition, the FH-10 subfraction, obtained from FH fractionation, presented a MIC of 1.56 μg mL-1 against S. pneumoniae; EER also inhibited the fungus Candida glabrata (MIC 7.81 μg mL-1). The FAE-4.7.3 fraction, from the fractionation of FAE, presented a MIC of 200 μg mL-1 against Lactobacillus casei, which is one of the causes of caries and oral infections. By correlating the chemical and biological data, it can be noted that FAE-4.7.3 and FH-10 contain 4-hydroxy-2,3-methylenedioxyxanthone, 3-hydroxy-1,2-dimethoxyxanthone, lupeol, a prenylated acylphloroglucinol, and quercitrin, which could be associated with the biological potential found. Therefore, this study provides an important basis for further investigations of the compounds present in the active fractions of K. coriacea, which will permit the establishment of a correlation between the ethnobotanical survey and bioactivity.
Keywords: biological activity, ethnobotanical survey, Kielmeyera coriacea Mart., LC-MS profile
Procedia PDF Downloads 141
24214 Developing a Research Culture in the Faculty of Engineering and Information Technology at the Central University of Technology, Free State: Implications for Knowledge Management
Authors: Mpho Agnes Mbeo, Patient Rambe
Abstract:
The thirteenth year of the Central University of Technology, Free State’s (CUT) transition from a vocational and professional training institution (i.e., a technikon) into a university with a strong research focus has been neither a smooth nor an easy one. At the heart of this transition was the need to transform the psychological faculties of an academic and research staff complement accustomed to training graduates for industrial placement. The lack of a research culture that fully embraces a strong ethos of conducting world-class research needed to be addressed. The induction and socialisation of academic staff into the development and execution of cutting-edge research also required the provision of research support and the creation of an academic environment conducive to research, for both emerging and non-research-active academics. Drawing on ten cases, comprising four heads of departments, three prolific established researchers, and three emerging researchers, this study explores the challenges faced in establishing a strong research culture at the university. Furthermore, it gives an account of the extent to which the current research interventions have addressed the perceived “missing research culture”, and the implications of these interventions for knowledge management. Evidence suggests that the endowment of an ideal institutional research environment (comprising strong internet networks and persistent connectivity on and off campus), research peer mentorship, and growing publication outputs should be matched by a coherent research incentive culture and strong research leadership. This is critical to building new knowledge and entrenching knowledge management founded on communities of practice and scholarly networking through the documentation and communication of research findings.
The study concludes that the multiple policy documents set for the different domains of research may be creating pressure on researchers to engage in research activities and increase output at the expense of research quality.
Keywords: Central University of Technology, performance, publication, research culture, university
Procedia PDF Downloads 173
24213 Reliable Consensus Problem for Multi-Agent Systems with Sampled-Data
Authors: S. H. Lee, M. J. Park, O. M. Kwon
Abstract:
In this paper, the reliable consensus of multi-agent systems with sampled-data is investigated. By using a suitable Lyapunov-Krasovskii functional and techniques such as the Wirtinger inequality, the Schur complement, and the Kronecker product, consensus criteria for these systems are obtained by solving a set of linear matrix inequalities (LMIs). One numerical example is included to show the effectiveness of the proposed criteria.
Keywords: multi-agent, linear matrix inequalities (LMIs), Kronecker product, sampled-data, Lyapunov method
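The consensus behaviour the abstract analyses can be illustrated with a minimal discrete-time averaging sketch (this is only the basic protocol on a fixed graph, not the paper's LMI-based reliability analysis; the ring graph and step size are invented for illustration):

```python
# Minimal averaging-consensus sketch: each agent repeatedly moves toward
# its neighbours' states; on a connected graph, with a small enough step,
# all states converge to the average of the initial values.

def consensus(states, neighbours, step=0.2, iters=200):
    """Apply x_i <- x_i + step * sum_j (x_j - x_i) over each agent's neighbours."""
    x = list(states)
    for _ in range(iters):
        x = [xi + step * sum(x[j] - xi for j in neighbours[i])
             for i, xi in enumerate(x)]
    return x

# Four agents on a ring graph; mean of initial states is 4.0.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
initial = [1.0, 5.0, 3.0, 7.0]
final = consensus(initial, neighbours)
# every entry of `final` approaches 4.0
```

The step size must keep the update matrix stable (here the ring Laplacian's largest eigenvalue is 4, so step = 0.2 is safely below 2/4 = 0.5... actually below 0.5 suffices for convergence).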
Procedia PDF Downloads 528
24212 Materialized View Effect on Query Performance
Authors: Yusuf Ziya Ayık, Ferhat Kahveci
Abstract:
Currently, database management systems provide various tools, such as backup and maintenance, and also supply statistical information such as resource usage and security. In terms of query performance, this paper covers query optimization, views, indexed tables, pre-computed materialized views, and query performance analysis, in which query plan alternatives can be created and the least costly one selected to optimize a query. Indexes and views can be created on related table columns. The literature review of this study showed that, over time, despite the growing capabilities of database management systems, only database administrators are aware of the need to treat archival and transactional data differently. Such data may be constantly changing data used in everyday life, or data from a completed questionnaire whose input is finished. The database applies its capabilities to both types of data; but, as shown in the findings section, instead of repeatedly performing the same heavy calculations with the same query over unchanging survey results, reading pre-computed materialized view results is far simpler. In this study, this performance difference was observed quantitatively in terms of query cost.
Keywords: cost of query, database management systems, materialized view, query performance
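The pre-computation idea described above can be sketched with Python's built-in sqlite3 module (SQLite has no native materialized views, so the summary is stored as an ordinary table and refreshed explicitly; the table and column names are illustrative, not from the paper):

```python
# Simulate a "materialized view": compute an expensive aggregate once,
# store it, and let later queries read the cheap pre-computed table
# instead of re-running the aggregation over the base data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey_answers (question_id INTEGER, score INTEGER)")
conn.executemany("INSERT INTO survey_answers VALUES (?, ?)",
                 [(1, 4), (1, 5), (2, 3), (2, 2), (2, 4)])

# The "materialized view": aggregate computed once and stored as a table.
conn.execute("""CREATE TABLE mv_question_stats AS
                SELECT question_id, COUNT(*) AS n, AVG(score) AS avg_score
                FROM survey_answers GROUP BY question_id""")

# Later reads hit the pre-computed table, not the base data.
rows = conn.execute(
    "SELECT question_id, n, avg_score FROM mv_question_stats ORDER BY question_id"
).fetchall()
# rows -> [(1, 2, 4.5), (2, 3, 3.0)]
```

For completed-questionnaire data that never changes, the stored summary never needs refreshing, which is exactly the case where the paper expects the largest query-cost savings.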
Procedia PDF Downloads 280
24211 Development of Highly Repellent Silica Nanoparticles Treatment for Protection of Bio-Based Insulation Composite Material
Authors: Nadia Sid, Alan Taylor, Marion Bourebrab
Abstract:
The construction sector is on the critical path to decarbonising the European economy by 2050. To achieve this objective, it must reduce its CO2 emissions by 90% and its energy consumption by as much as 50%. For this reason, a new class of low-environmental-impact construction materials named “eco-materials” is becoming increasingly important in the struggle against climate change. A European-funded collaborative project, ISOBIO, coordinated by TWI, takes a radical approach to the use of bio-based aggregates to create novel construction materials that are usable in high volume with traditional methods, as well as in developing markets such as exterior insulation of the existing housing stock. The approach taken in this project is to use finely chopped material protected from biodegradation through the use of functionalised silica nanoparticles. TWI is exploring the development of novel inorganic-organic hybrid nanomaterials to be applied as a surface treatment onto bio-based aggregates. These nanoparticles are synthesised by sol-gel processing and then functionalised with silanes to impart multifunctionality, e.g. hydrophobicity, fire resistance, and chemical bonding between the silica nanoparticles and the bio-based aggregates. This talk illustrates the approach taken by TWI to design the functionalised silica nanoparticles using a material-by-design approach. The formulation and synthesis process are presented together with the challenges addressed by these hybrid nanomaterials. The results obtained with regard to water repellence and fire resistance are displayed together with preliminary public results of the ISOBIO project. (This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 641927.)
Keywords: bio-sourced material, composite material, durable insulation panel, water repellent material
Procedia PDF Downloads 237
24210 An AK-Chart for the Non-Normal Data
Authors: Chia-Hau Liu, Tai-Yue Wang
Abstract:
Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism based on integrating a one-class classification method with an adaptive technique. The adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, this design provides an easy way to allocate the value of the type I error, making the chart easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control chart.
Keywords: multivariate control chart, statistical process control, one-class classification method, non-normal data
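The one-class monitoring idea can be sketched with a simple distance-based scheme (this stands in for the authors' AK-chart, which is not specified here; the training data, quantile rule, and threshold choice are all illustrative assumptions):

```python
# Illustrative distribution-free one-class monitor: fit a region around
# in-control training data via distances to the training mean, and flag
# any new observation whose distance exceeds an empirical quantile.
# The quantile level plays the role of the allocatable type I error.
import math

def fit_one_class(train, alpha=0.05):
    dim = len(train[0])
    mean = [sum(row[d] for row in train) / len(train) for d in range(dim)]
    dists = sorted(math.dist(row, mean) for row in train)
    # threshold at the (1 - alpha) empirical quantile of training distances
    k = min(len(dists) - 1, int((1 - alpha) * len(dists)))
    return mean, dists[k]

def in_control(x, model):
    mean, threshold = model
    return math.dist(x, mean) <= threshold

# in-control observations scattered around (10.0, 5.0)
train = [(10.0 + 0.1 * i, 5.0 - 0.05 * i) for i in range(-5, 6)]
model = fit_one_class(train, alpha=0.05)
ok = in_control((10.0, 5.0), model)     # near the centre -> in control
shifted = in_control((13.0, 2.0), model)  # large shift -> out of control
```

Because the boundary is estimated from the data rather than from a normality assumption, the same idea applies to non-normal processes, which is the setting the abstract targets.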
Procedia PDF Downloads 423
24209 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation
Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves
Abstract:
Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study presents an exploration into the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland, by means of online text extraction and the subsequent mining of this data. Web scraping of the livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found in both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it has produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but the requirement for human validation of such data is crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP
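The bi-gram extraction step mentioned above can be sketched with the standard library alone (the forum corpus itself is not public, so the sample posts below are invented for illustration):

```python
# Count adjacent word pairs (bi-grams) across a small corpus of posts;
# frequent bi-grams surface recurring husbandry themes like "feeding schedule".
from collections import Counter
import re

posts = [
    "feeding schedule for pigs and feeding costs",
    "advice on feeding schedule before slaughter",
    "poultry breeding and feeding schedule tips",
]

def bigrams(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    return list(zip(tokens, tokens[1:]))

counts = Counter(bg for post in posts for bg in bigrams(post))
top = counts.most_common(1)
# ("feeding", "schedule") occurs in all three posts, so it ranks first
```

Real pipelines would add stop-word removal and feed the token counts into a topic model (e.g. LDA), but the counting core is the same.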
Procedia PDF Downloads 99
24208 Panel Application for Determining Impact of Real Exchange Rate and Security on Tourism Revenues: Countries with Middle and High Level Tourism Income
Authors: M. Koray Cetin, Mehmet Mert
Abstract:
The purpose of this study is to examine the impacts of the exchange rate and a country's overall security level on tourism revenues. Numerous studies examine the bidirectional relation between macroeconomic factors and tourism revenues and tourism demand. Most of these studies support an impact of tourism revenues on the growth rate, but not vice versa; few examine the impact of factors such as the real exchange rate or purchasing power parity on tourism revenues. In this context, the first aim is to examine the impact of the real exchange rate on tourism revenues, because the exchange rate is one of the main determinants of the price of international tourism services in the guest's currency unit. Another determinant of tourism demand for a country is its overall security level. This issue can be handled in the context of the relationship between tourism revenues and overall security, including turmoil, terrorism, border problems, and political violence. In this study, these factors are examined for several countries whose tourism revenues exceed a certain level. With this structure, the data form a panel, and they are evaluated with panel data analysis techniques. Panel data have at least two dimensions, one of which is time. The panel data analysis techniques are applied to data gathered from the World Bank data web page. This study is expected to find the impacts of real exchange rate and security factors on tourism revenues for countries with noteworthy tourism revenues.
Keywords: exchange rate, panel data analysis, security, tourism revenues
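A core panel technique that could serve the analysis above is the fixed-effects (within) estimator, which removes each country's time-invariant level before estimating a slope. The sketch below uses one regressor and invented data (the abstract does not publish its dataset or chosen estimator):

```python
# One-regressor fixed-effects estimator: demean x and y within each
# entity (country), then compute the pooled OLS slope on the deviations.

def within_slope(panel):
    """panel: list of (entity, x, y) observations."""
    entities = {e for e, _, _ in panel}
    n = {e: sum(1 for ee, _, _ in panel if ee == e) for e in entities}
    xbar = {e: sum(x for ee, x, _ in panel if ee == e) / n[e] for e in entities}
    ybar = {e: sum(y for ee, _, y in panel if ee == e) / n[e] for e in entities}
    num = sum((x - xbar[e]) * (y - ybar[e]) for e, x, y in panel)
    den = sum((x - xbar[e]) ** 2 for e, x, _ in panel)
    return num / den

# Two countries with the same within-country slope (2.0) but very
# different fixed levels of tourism revenue.
panel = [("A", 1.0, 10.0), ("A", 2.0, 12.0), ("A", 3.0, 14.0),
         ("B", 1.0, 50.0), ("B", 2.0, 52.0), ("B", 3.0, 54.0)]
slope = within_slope(panel)   # -> 2.0
```

Pooled OLS on these raw data would be distorted by the level gap between countries; demeaning recovers the common slope, which is why fixed-effects models are standard for cross-country panels like this.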
Procedia PDF Downloads 351
24207 The Effect of General Data Protection Regulation on South Asian Data Protection Laws
Authors: Sumedha Ganjoo, Santosh Goswami
Abstract:
The rising reliance on technology places national security at the forefront of 21st-century issues. It complicates the efforts of emerging and developed countries to combat cyber threats and increases the inherent risk factors connected with technology. The inability to preserve data securely can have devastating repercussions on a massive scale. Consequently, it is vital to establish national, regional, and global data protection rules and regulations that penalise individuals who participate in immoral technology usage and exploit the inherent vulnerabilities of technology. This paper seeks to analyse GDPR-inspired bills in the South Asian region and determine their suitability for the development of a worldwide data protection framework, considering that Asian countries are much more diversified than European ones. In light of this context, the objectives of this paper are to identify GDPR-inspired bills in the South Asian region and their similarities and differences, as well as the obstacles to developing a regional-level data protection mechanism, thereby addressing the need to develop a global-level mechanism. Given the qualitative character of this study, the researcher conducted a comprehensive literature review of prior research papers, journal articles, survey reports, and government publications on the aforementioned topics. Taking the survey results into consideration, the researcher conducted a critical analysis of the significant parameters highlighted in the literature review. A primary finding of this study is that many nations in the South Asian area are in the process of revising their present data protection measures in accordance with GDPR. Consideration is given to the data protection laws of Thailand, Malaysia, China, and Japan, and their significant parallels with and differences from GDPR are discussed in detail.
The conclusion of the research analyses the development of various data protection legislation regimes in South Asia.
Keywords: data privacy, GDPR, Asia, data protection laws
Procedia PDF Downloads 82
24206 Preparation of Zinc Oxide Nanoparticles and Its Anti-diabetic Effect with Momordica Charantia Plant Extract in Diabetic Mice
Authors: Zahid Hussain, Nayyab Sultan
Abstract:
This study describes the preparation of zinc oxide nanoparticles and their anti-diabetic effect, both individually and in combination with Momordica charantia plant extract. This plant is known as bitter melon, balsam pear, bitter gourd, or karela. Blood glucose levels in mice were monitored in their random state before and after the administration of zinc oxide nanoparticles and plant extract. The powdered forms of the nanoparticles and the selected plant were used as an oral treatment. Diabetes was induced in mice using streptozotocin, a synthetic diabetes-inducing chemical. For zinc oxide nanoparticles (3 mg/kg) and Momordica charantia plant extract (500 mg/kg), the maximum anti-diabetic effects observed were 70% ± 1.6 and 75% ± 1.3, respectively. For the combination of zinc oxide nanoparticles (3 mg/kg) and Momordica charantia plant extract (500 mg/kg), the maximum anti-diabetic effect observed was 86% ± 2.0. These results were more effective than the standard drugs Amaryl (3 mg/kg), with an effectiveness of 52% ± 2.4, and Glucophage (500 mg/kg), with an effectiveness of 29% ± 2.1. The results indicate that zinc oxide nanoparticles and plant extract in combination are more helpful in treating diabetes than their individual treatments. The combination is considered a natural treatment without side effects, in contrast to standard drugs, which show adverse effects on health and are mostly detoxified in the liver and kidneys. More experimental work and extensive research procedures are still required to make these treatments applicable in the pharmaceutical industry.
Keywords: albino mice, amaryl, anti-diabetic effect, blood glucose level, Camellia sinensis, diabetes mellitus, Momordica charantia plant extract, streptozotocin, zinc oxide nanoparticles
Procedia PDF Downloads 113
24205 Longitudinal Analysis of Internet Speed Data in the Gulf Cooperation Council Region
Authors: Musab Isah
Abstract:
This paper presents a longitudinal analysis of Internet speed data in the Gulf Cooperation Council (GCC) region, focusing on the most populous cities of each of the six countries – Riyadh, Saudi Arabia; Dubai, UAE; Kuwait City, Kuwait; Doha, Qatar; Manama, Bahrain; and Muscat, Oman. The study utilizes data collected from the Measurement Lab (M-Lab) infrastructure over a five-year period from January 1, 2019, to December 31, 2023. The analysis includes downstream and upstream throughput data for the cities, covering significant events such as the launch of 5G networks in 2019, COVID-19-induced lockdowns in 2020 and 2021, and the subsequent recovery period and return to normalcy. The results showcase substantial increases in Internet speeds across the cities, highlighting improvements in both download and upload throughput over the years. All the GCC countries have achieved above-average Internet speeds that can conveniently support various online activities and applications with excellent user experience.
Keywords: internet data science, internet performance measurement, throughput analysis, internet speed, measurement lab, network diagnostic tool
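The yearly aggregation step behind a longitudinal throughput analysis like this can be sketched with the standard library (the throughput values below are invented placeholders, not M-Lab data; medians are typically preferred over means for throughput because of heavy right tails):

```python
# Group per-measurement download throughputs by year and take the median,
# giving one robust summary point per year for trend analysis.
from collections import defaultdict
from statistics import median

# (year, download_Mbps) pairs as they might come out of a measurement export
measurements = [(2019, 22.0), (2019, 30.0), (2020, 41.0),
                (2020, 35.0), (2020, 48.0), (2021, 60.0)]

by_year = defaultdict(list)
for year, mbps in measurements:
    by_year[year].append(mbps)

yearly_median = {year: median(vals) for year, vals in sorted(by_year.items())}
# -> {2019: 26.0, 2020: 41.0, 2021: 60.0}
```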
Procedia PDF Downloads 62
24204 A Web Service Based Sensor Data Management System
Authors: Rose A. Yemson, Ping Jiang, Oyedeji L. Inumoh
Abstract:
The deployment of wireless sensor networks has increased rapidly; however, with the increased capacity and diversity of sensors, and applications ranging from biological and environmental to military, tremendous volumes of data are generated, yet most attention is placed on distributed sensing and little on how to manage, analyze, retrieve, and understand the data. This makes it quite difficult to process live sensor data and to run concurrent control and updates, because sensor data are heavyweight, complex, and slow to process. This work focuses on developing a web service platform for the automatic detection of sensors, acquisition of sensor data, storage of sensor data in a database, and processing of sensor data using reconfigurable software components. It also creates a web-service-based sensor data management system to monitor the physical movement of an individual wearing wireless sensor network technology (SunSPOT). The sensor detects the movement of the individual by sensing the acceleration along the X, Y, and Z axes and sends the readings to a database interfaced with an internet platform. The collected sensed data determine the posture of the person, such as standing, sitting, or lying down. The system is designed using the Unified Modeling Language (UML) and implemented using Java, JavaScript, HTML, and MySQL. This system allows close real-time monitoring of an individual and obtains details of their physical activity without the monitor being physically present for in-situ measurement, enabling remote work instead of time-consuming in-person checks. These details can help in evaluating an individual's physical activity and generating feedback on medication. They can also help in keeping track of any mandatory physical activities required of the individual.
These evaluations and feedback can help in maintaining a better health status for the individual and in providing improved health care.
Keywords: HTML, java, javascript, MySQL, sunspot, UML, web-based, wireless network sensor
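The posture inference described above can be sketched as a rule on which accelerometer axis carries most of gravity (the system itself is Java-based; this Python sketch, the axis conventions, and the thresholds are all assumptions for illustration, not the SunSPOT's actual calibration):

```python
# Decide standing / lying / sitting from static accelerations (in g):
# assume the y axis is vertical when the wearer stands and the z axis
# points out of the chest when they lie on their back; sitting is the
# fallback when no single axis dominates.

def posture(ax, ay, az):
    if abs(ay) > 0.8:
        return "standing"
    if abs(az) > 0.8:
        return "lying"
    return "sitting"

standing = posture(0.0, 1.0, 0.1)   # gravity on y
lying = posture(0.1, 0.2, 0.95)     # gravity on z
sitting = posture(0.5, 0.6, 0.4)    # no dominant axis
```

A deployed version would low-pass filter the raw stream first so that walking motion does not masquerade as a change of posture.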
Procedia PDF Downloads 212
24203 The Impact of Artificial Intelligence on Student’s Behavior and Mind
Authors: Makarios Mosaad Thabet Ibrahim
Abstract:
The present paper aims to highlight the important position of the ‘student voice’ and the music teacher in the classroom, which contributes to more student-centered music education. The goal is to focus on the capabilities of the student voice across the music spectrum as it emerges in the music classroom, and on the teacher's methodologies and techniques used within the music classroom. The music curriculum, the principles of student-centered music education, and the role of students and teachers as music ambassadors are considered the essential musical parameters of the student voice. The student voice is a worth-mentioning aspect of student-centered education, and all teachers should acknowledge and promote its existence in their classroom. Student affairs services play a critical role in contributing to the holistic development and success of college students as they progress through their academic careers. The study comprises a multifaceted examination of student affairs service offerings among ten private and three public Baghdad universities. Student affairs administrators (thirteen) were surveyed, together with over 300 students, to determine university-sponsored services and student satisfaction and awareness. The student affairs service findings varied drastically between private and public institutions and those that followed a national or international curriculum. Universities need to keep adapting to changing demographics and technological advances to enhance students' personal and academic success, and student affairs services are key to preparing graduates to thrive in a diverse global world.
Keywords: college student-athletes, self-concept, use of social media training, social networking student affairs, student success, higher education, Iraq, universities, Baghdad student's voice, student-centered education, music ambassadors, music teachers
Procedia PDF Downloads 33
24202 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for every person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require for the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images. Patients were grouped by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. In short, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently. It ultimately comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
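Of the methods listed above, k-means is the simplest to sketch end to end. The toy run below groups invented one-dimensional patient ages (a real project would use a library implementation, multiple features, and proper scaling):

```python
# Tiny 1-D k-means: assign each value to its nearest centroid, then
# move each centroid to the mean of its cluster; repeat until stable.

def kmeans_1d(values, centroids, iters=20):
    cents = list(centroids)
    clusters = [[] for _ in cents]
    for _ in range(iters):
        clusters = [[] for _ in cents]
        for v in values:
            idx = min(range(len(cents)), key=lambda i: abs(v - cents[i]))
            clusters[idx].append(v)
        cents = [sum(c) / len(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return cents, clusters

# invented patient ages forming two obvious groups
ages = [21, 23, 25, 64, 66, 70]
cents, clusters = kmeans_1d(ages, centroids=[20.0, 60.0])
# centroids settle near 23 and ~66.7, splitting young from elderly patients
```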
Procedia PDF Downloads 76
24201 The Nexus of Federalism and Economic Development: A Politico-Economic Analysis of Balochistan, Pakistan
Authors: Rameesha Javaid
Abstract:
Balochistan, the largest landmass, named after and dominated by its 55% Baloch population, which has had a difficult anti-center history like their brothers the Kurds of the Middle East, reluctantly acceded to Pakistan in 1947. The region, which attained the status of a province only two decades after accession, has lagged behind the other three federating units in social development and economic growth. The province has seen the least financial autonomy and administrative decentralization in both autocratic and democratic dispensations, under geostrategic and security considerations. Significant corrections have recently been made in the policy framework by changing the formula for the intra-provincial National Finance Award, curtailing the number of subjects under federal control, and reactivating the Council of Common Interests. Yet policymaking remains overwhelmingly bureaucratic, under weak parliamentary oversight. The provincial coalition governments are unwieldy and directionless. The government machinery has much less than the optimal capability, character, integrity, will, and opportunity to perform. Decentralization further loses its semblance in the absence of local governments for long intervals and with the hold of hereditary tribal chiefs. Increased allocations have failed to make an impact in an environment with the highest per capita cost of service delivery, owing to long distances and scattered settlements. Decentralization, the basic ingredient of federalism, has remained mortgaged to geostrategic factors, internal security perceptions, autocratic and individualistic styles of government, bureaucratic policymaking structures, bad governance, non-existent local governments, and feudalistic tribal lords. This suboptimal federalism accounts for the present underdevelopment in Balochistan and will earmark its milestones in the future.
Keywords: Balochistan, economic development, federalism, political economy
Procedia PDF Downloads 310
24200 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large data sample sizes and dimensions undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for collecting knowledgeable information from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly masks its quality analysis and leaves us with quite few practical approaches to use. To our knowledge, we present for the first time a new approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which allows us to select a few features from the full feature set as a subset, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features which may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. A sample dataset has been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
Keywords: data mining, genetic algorithm, KNN algorithms, wrapper based feature selection
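The wrapper idea above, evaluating candidate feature subsets by the accuracy of a classifier trained on them, can be sketched as follows. The nearest-centroid classifier stands in for the paper's unspecified external classifier, and the four-feature toy dataset is invented, so this is a sketch of the technique rather than the authors' method:

```python
# Genetic-algorithm wrapper: chromosomes are feature bitmasks, and the
# fitness of a mask is the training accuracy of a nearest-centroid
# classifier restricted to the selected features.
import random

random.seed(0)

def accuracy(X, y, mask):
    feats = [i for i, b in enumerate(mask) if b]
    if not feats:
        return 0.0
    cent = {}
    for label in set(y):
        rows = [x for x, yy in zip(X, y) if yy == label]
        cent[label] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    correct = 0
    for x, yy in zip(X, y):
        pred = min(cent, key=lambda c: sum((x[i] - ci) ** 2
                                           for i, ci in zip(feats, cent[c])))
        correct += pred == yy
    return correct / len(y)

def ga_select(X, y, n_feats, pop=20, gens=30, mut=0.1):
    popu = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda m: accuracy(X, y, m), reverse=True)
        parents = popu[:pop // 2]                  # keep the fitter half
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feats)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mut else g for g in child]
            children.append(child)
        popu = parents + children
    return max(popu, key=lambda m: accuracy(X, y, m))

# feature 0 separates the classes; the remaining columns are noise
X = [[0, 9, 5, 1], [1, 2, 5, 7], [0, 7, 1, 3],
     [1, 1, 8, 2], [0, 8, 3, 9], [1, 3, 2, 4]]
y = [0, 1, 0, 1, 0, 1]
best = ga_select(X, y, n_feats=4)
```

Counting how often each feature appears in the fittest chromosomes, as the abstract describes, then ranks features by their occurrence across the evolved population.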
Procedia PDF Downloads 316
24199 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in educational data mining techniques to find new hidden information in students' learning behavior, particularly to uncover the early symptoms of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may produce incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing ensemble learning techniques and identifying the most appropriate one after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data was gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
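The majority-vote combination step itself is simple to sketch (the full pipeline with Random Forest, SVM, and the rest would need a library such as scikit-learn; here three stand-in models' label predictions are combined with the standard library, and the labels are invented):

```python
# Hard majority voting: for each sample, the final label is the most
# common prediction across the base models.
from collections import Counter

def majority_vote(predictions_per_model):
    """predictions_per_model: list of equal-length label lists, one per model."""
    n = len(predictions_per_model[0])
    combined = []
    for i in range(n):
        votes = Counter(preds[i] for preds in predictions_per_model)
        combined.append(votes.most_common(1)[0][0])
    return combined

preds = [
    ["pass", "fail", "pass", "pass"],   # model 1
    ["pass", "fail", "fail", "pass"],   # model 2
    ["fail", "fail", "pass", "pass"],   # model 3
]
final = majority_vote(preds)  # -> ['pass', 'fail', 'pass', 'pass']
```

An odd number of base models avoids ties for binary labels, which is one reason voting ensembles are usually built from three or five classifiers.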
Procedia PDF Downloads 108
24198 Foundation of the Information Model for Connected-Cars
Authors: Hae-Won Seo, Yong-Gu Lee
Abstract:
Recent progress in the next generation of automobile technology is geared towards incorporating information technology into cars. Collectively called smart cars, these vehicles bring intelligence that provides comfort, convenience, and safety. One branch of smart cars is the connected-car system. The key concept in connected-cars is the sharing of driving information among cars in a decentralized manner, enabling collective intelligence. This paper proposes a foundation for the information model needed to define driving information for smart cars. Road conditions are modeled through a unique data structure that unambiguously represents the time-variant traffic in the streets. Additionally, the modeled data structure is exemplified in a navigational scenario using UML. Optimal driving route searching under dynamically changing road conditions is also discussed using the proposed data structure.
Keywords: connected-car, data modeling, route planning, navigation system
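As an illustration of route planning over time-variant road conditions (a sketch under assumed data structures, not the paper's actual UML model), a Dijkstra-style search can take per-segment travel times as functions of the entry time:

```python
import heapq

def fastest_route(graph, start, goal, depart_time):
    """Time-dependent shortest path: each edge's cost is a function of the
    time the car enters it, modelling time-variant traffic per street."""
    # graph: node -> list of (neighbor, travel_time_fn), where
    # travel_time_fn(t) returns the segment travel time if entered at t.
    best = {start: depart_time}
    heap = [(depart_time, start, [start])]
    while heap:
        t, node, path = heapq.heappop(heap)
        if node == goal:
            return t, path
        for nxt, travel_time in graph.get(node, []):
            arrival = t + travel_time(t)
            if arrival < best.get(nxt, float("inf")):
                best[nxt] = arrival
                heapq.heappush(heap, (arrival, nxt, path + [nxt]))
    return None

# Toy road network: congestion on A->B after t=8 doubles its travel time.
graph = {
    "A": [("B", lambda t: 10 if t < 8 else 20), ("C", lambda t: 15)],
    "B": [("D", lambda t: 5)],
    "C": [("D", lambda t: 5)],
}
print(fastest_route(graph, "A", "D", depart_time=9))  # -> (29, ['A', 'C', 'D'])
print(fastest_route(graph, "A", "D", depart_time=0))  # -> (15, ['A', 'B', 'D'])
```

The same origin and destination yield different optimal routes depending on departure time, which is the behavior the shared driving information is meant to enable.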
Procedia PDF Downloads 374
24197 A Mainstream Aesthetic for African American Female Filmmakers
Authors: Tracy L. F. Worley
Abstract:
This presentation explores the environment that has limited leadership opportunities for Black women in cinema and advocates for autonomy among Black women filmmakers that is facilitated by strong internal and external networks and cooperative opportunities. Early images of African Americans in motion pictures were often conceptualized from the viewpoint of a White male director and depicted by White actors. The black film evolved in opposition to this context, leading to a Black film aesthetic. The oppositional context created in response to racist, misogynistic, and sexist representations in motion pictures sets the tone for female filmmakers of every hue – but especially for African American women. For them, the context of a male gaze, and for all intents and purposes, a White male gaze, forces them to create their own aesthetic. Theoretically, men and women, filmmakers and spectators have different perspectives across race, ethnicity, and gender. Two feminist theorists, bell hooks and Mary Ann Doane, suggest that female filmmakers are perceived as disparate from male filmmakers and that women, in general, are defined by what men see. Mary Ann Doane, a White feminist film theorist, has focused extensively on female spectatorship and women (White) in general as the object of the male gaze. Her discussion of the female body, male perception of it, and feminism in the motion picture industry support the suggestion that comprehending the organization and composition of Hollywood is critical to understanding women’s roles in the industry. Although much of her research addresses the silent film era and women’s roles then, Doane suggests that across cinematic periods, the theory assigned to “cinematic apparatus” is formulated within a context of sexuality. 
Men and women are viewed and treated differently in cinema (in front of and behind the camera), with women's attractiveness and allure photographed specifically for the benefit of the "spectatorial desire" of the male gaze. bell hooks, an African American feminist writer and theorist with more than 30 published books and articles on race, gender, class, and culture in feminism and education, suggests that women can overcome the male gaze by using their "oppositional gaze" to transform reality and establish their own truth. She addresses gender within the context of race by acknowledging the realities faced by African American women and the fact that the feminist movement was never intended to include Black women. A grounded theory study led to the development of a leadership theory that explains why African American women are disproportionately underrepresented in mainstream motion picture leadership. The study helped to reveal the barriers to entry and illuminated potential strategies that African American female motion picture directors might pursue to reduce this inequity. Using semi-structured interviews as the primary means of data collection, the lived experiences of African American female directors and organizational leadership's perceived role in the perpetuation of negative female imagery in major motion pictures led to the identification of support strategies for African American female motion picture directors that counter social stereotyping and validate the need for social networking in the mainstream.
Keywords: African American, cinema, directors, filmmaking, leadership, women
Procedia PDF Downloads 66
24196 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data
Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad
Abstract:
Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data acquired through satellites, radars, and sensors consist of important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications; however, classifying and identifying objects manually from images is difficult. Object recognition is often treated as a classification problem, and this task can be performed using machine-learning techniques. Among the many machine-learning algorithms, classification is done here using supervised classifiers such as Support Vector Machines (SVM), as the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; we generated the attributes by considering pixel intensity values and mean reflectance values. We demonstrate the benefits of using knowledge discovery and data-mining techniques, which can be applied to image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
Keywords: remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction
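A minimal sketch of the neighboring-pixel feature idea, with a toy intensity grid standing in for real satellite reflectance data (the actual attribute set in the paper may differ):

```python
def neighborhood_features(image, i, j, size=1):
    """Feature vector for pixel (i, j): its own intensity plus the mean
    intensity of its (2*size+1) x (2*size+1) neighborhood, clipped at the
    image border."""
    h, w = len(image), len(image[0])
    window = [image[r][c]
              for r in range(max(0, i - size), min(h, i + size + 1))
              for c in range(max(0, j - size), min(w, j + size + 1))]
    return [image[i][j], sum(window) / len(window)]

# 4x4 toy "reflectance" grid: bright land (9) next to dark water (1).
image = [[9, 9, 1, 1],
         [9, 9, 1, 1],
         [9, 9, 1, 1],
         [9, 9, 1, 1]]
print(neighborhood_features(image, 1, 0))  # -> [9, 9.0]
```

Each pixel's feature vector could then be fed to an SVM or similar supervised classifier, so that neighborhood context, not just single-pixel intensity, drives the class decision.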
Procedia PDF Downloads 340
24195 Automated Multisensory Data Collection System for Continuous Monitoring of Refrigerating Appliances Recycling Plants
Authors: Georgii Emelianov, Mikhail Polikarpov, Fabian Hübner, Jochen Deuse, Jochen Schiemann
Abstract:
Recycling refrigerating appliances plays a major role in protecting the Earth's atmosphere from ozone depletion and emissions of greenhouse gases. The performance of refrigerator recycling plants in terms of material retention is the subject of strict environmental certifications and is reviewed periodically through specialized audits. The continuous collection of refrigerator data required for the input-output analysis is still mostly manual, error-prone, and not digitalized. In this paper, we propose an automated data collection system for recycling plants in order to deduce the expected material contents of individual end-of-life refrigerating appliances. The system utilizes laser scanner measurements and optical data to extract attributes of individual refrigerators by applying transfer learning with pre-trained vision models and optical character recognition. Based on the recognized features, the system automatically provides material categories and target values of contained material masses, especially foaming and cooling agents. The presented data collection system paves the way for continuous performance monitoring and efficient control of refrigerator recycling plants.
Keywords: automation, data collection, performance monitoring, recycling, refrigerators
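The final step, mapping recognized attributes to material categories and target masses, can be sketched with a hypothetical lookup table; the attribute names, categories, and mass values below are illustrative, not the plant's certified figures:

```python
# Hypothetical reference table: (cooling type, size class) -> expected
# masses of critical materials in kilograms per appliance.
MATERIAL_TARGETS = {
    ("compressor", "small"): {"foaming_agent": 0.25, "cooling_agent": 0.05},
    ("compressor", "large"): {"foaming_agent": 0.45, "cooling_agent": 0.09},
    ("absorber",   "small"): {"foaming_agent": 0.20, "cooling_agent": 0.30},
}

def expected_materials(recognized):
    """Map vision/OCR-recognized attributes of one end-of-life refrigerator
    to its target material masses; returns None for unknown categories."""
    key = (recognized["cooling_type"], recognized["size_class"])
    return MATERIAL_TARGETS.get(key)

unit = {"cooling_type": "compressor", "size_class": "large"}
print(expected_materials(unit))
# -> {'foaming_agent': 0.45, 'cooling_agent': 0.09}
```

Summing these per-unit targets over all processed appliances gives the expected input side of the plant's input-output analysis.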
Procedia PDF Downloads 164
24194 Sales Patterns Clustering Analysis on Seasonal Product Sales Data
Authors: Soojin Kim, Jiwon Yang, Sungzoon Cho
Abstract:
As a seasonal product is only in demand for a short time, inventory management is critical to profits. Both markdowns and stockouts decrease the return on perishable products; therefore, researchers have been interested in the distribution of seasonal products with the aim of maximizing profits. In this study, we propose a data-driven seasonal product sales pattern analysis method for individual retail outlets based on observed sales data clustering; the proposed method helps in determining distribution strategies.
Keywords: clustering, distribution, sales pattern, seasonal product
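A sales-pattern clustering of outlets could be sketched with a minimal k-means over weekly sales vectors; the seeding, toy data, and Euclidean distance here are assumptions for illustration, not the paper's method:

```python
def kmeans(series, k, iters=20):
    """Minimal k-means over equal-length sales-pattern vectors."""
    centroids = series[:k]  # deterministic seeding for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in series:
            d = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in centroids]
            clusters[d.index(min(d))].append(s)
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Weekly sales of a seasonal item at six outlets: some peak early in the
# season, some peak late.
sales = [[9, 5, 1], [8, 6, 2], [1, 5, 9], [2, 4, 8], [9, 4, 2], [1, 6, 9]]
groups = kmeans(sales, k=2)
print(groups)
# -> early-peak outlets in one cluster, late-peak outlets in the other
```

Distribution strategy can then be set per cluster, e.g. front-loading stock for early-peak outlets.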
Procedia PDF Downloads 595
24193 A Low Cost Education Proposal Using Strain Gauges and Arduino to Develop a Balance
Authors: Thais Cavalheri Santos, Pedro Jose Gabriel Ferreira, Alexandre Daliberto Frugoli, Lucio Leonardo, Pedro Americo Frugoli
Abstract:
This paper presents a low-cost education proposal to be used in engineering courses. Providing engineering education affordably and with quality, in the universities of a developing country that needs an increasing number of engineers, is a difficult problem to solve. In Brazil, the political and economic scenario requires academic managers able to reduce costs without compromising the quality of education. Within this context, a method for teaching physics principles through the construction of an electronic balance is proposed. First, a method to develop and construct a load cell is presented, through which the students can understand the physical principle of strain gauges and the bridge circuit. The load cell structure was made of aluminum 6351-T6, with dimensions of 80 mm x 13 mm x 13 mm, and for its instrumentation, a complete Wheatstone bridge was assembled with strain gauges of 350 ohms. Additionally, the process involves the use of a software tool to document the prototypes (circuit design), signal conditioning, a microcontroller, C language programming, and the development of the prototype. The project also uses an open-source I/O board (Arduino microcontroller). To design the circuit, the Fritzing software is used and, to program the controller, the open-source Arduino IDE. A load cell was chosen because strain gauges are accurate and have several applications in industry. A prototype was developed for this study, and it confirmed the affordability of this educational idea. Furthermore, the goal of this proposal is to motivate the students to understand the many possible high-technology applications of load cells and microcontrollers.
Keywords: Arduino, load cell, low-cost education, strain gauge
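The bridge circuit's behavior follows the standard quarter-bridge relation Vout ≈ Vexc · GF · ε / 4, which students can check numerically; the gauge factor and excitation voltage below are typical assumed values, not measurements from the prototype:

```python
def bridge_output(v_excitation, gauge_factor, strain, active_gauges=1):
    """Approximate Wheatstone bridge output voltage for small strains.

    For a quarter bridge (active_gauges=1): Vout ~= Vexc * GF * strain / 4.
    Full bridges multiply the sensitivity by the number of active gauges.
    """
    return v_excitation * active_gauges * gauge_factor * strain / 4

# 350-ohm gauge, gauge factor ~2.0, 5 V excitation, 500 microstrain:
vout = bridge_output(5.0, 2.0, 500e-6)
print(round(vout * 1000, 3), "mV")  # -> 1.25 mV
```

The millivolt-level output explains why the signal must be conditioned (amplified) before the Arduino's ADC can resolve it.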
Procedia PDF Downloads 303
24192 Probability Sampling in Matched Case-Control Study in Drug Abuse
Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell
Abstract:
Background: Although random sampling is generally considered the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified "friend controls", and the other using a random sample of non-drug users (controls) who then identified "friend cases". Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrap method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be statistically superior to snowball sampling and may represent a viable alternative to it.
Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling
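The bootstrap procedure used to compare precision can be sketched as resampling with replacement and recomputing the statistic; the data and statistic below are illustrative, not the study's survey data:

```python
import random

def bootstrap_se(data, statistic, n_boot=1000, seed=0):
    """Standard error of a statistic via the nonparametric bootstrap:
    resample with replacement, recompute the statistic, and take the
    standard deviation of the replicates."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        reps.append(statistic(sample))
    mean = sum(reps) / n_boot
    return (sum((r - mean) ** 2 for r in reps) / (n_boot - 1)) ** 0.5

data = [2.1, 2.5, 3.0, 3.2, 3.8, 4.1, 4.4, 5.0]
se = bootstrap_se(data, lambda xs: sum(xs) / len(xs))
print(round(se, 3))
```

In the study this is applied to regression coefficients rather than a mean: the wider the spread of the bootstrap replicates, the less precise the estimate, which is what distinguished the snowball-sample model from the random-sample model.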
Procedia PDF Downloads 493
24191 Preventive Effect of Three Kinds of Bacteriophages to Control Vibrio coralliilyticus Infection in Oyster Larvae
Authors: Hyoun Joong Kim, Jin Woo Jun, Sib Sankar Giri, Cheng Chi, Saekil Yun, Sang Guen Kim, Sang Wha Kim, Jeong Woo Kang, Se Jin Han, Se Chang Park
Abstract:
Vibrio coralliilyticus is a well-known pathogen of coral. It is also infectious to a variety of shellfish species, including Pacific oyster (Crassostrea gigas) larvae. V. coralliilyticus remains a major constraint in marine bivalve aquaculture, especially in artificial seed production facilities. Owing to the high mortality and contagious nature of the pathogen, large amounts of antibiotics have been used for disease prevention and control. However, indiscriminate use of antibiotics may result in food and environmental pollution and in the development of antibiotic-resistant strains. Therefore, eco-friendly disease preventive measures are imperative for sustainable bivalve culture. The present investigation proposes the application of bacteriophages (phages) as an effective alternative for controlling V. coralliilyticus infection in marine bivalve hatcheries. Phages were isolated from seawater samples using the drop or double-layer agar methods. The host range, stability, and morphology of the phage isolates were studied. In vivo phage efficacy in preventing V. coralliilyticus infection in oyster larvae was also assessed. Of the isolated phages, pVco-5 and pVco-7 were classified as podoviruses and pVco-14 as a siphovirus. Each phage was infective to four of the seven V. coralliilyticus strains tested. When oyster larvae were pre-treated with a phage before bacterial challenge, mortality of the treated larvae was lower than that of the untreated control. This result suggests that each phage has the potential to be used as a therapeutic agent for controlling V. coralliilyticus infection in marine bivalve hatcheries.
Keywords: bacteriophage, Vibrio coralliilyticus, oyster larvae, mortality
Procedia PDF Downloads 224
24190 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and how Protein Folding (the structure and function of proteins) and Phylogeny Reconstruction (the evolutionary history of a group of genes) can use HPC to evaluate or solve limited but meaningful problem instances. The paper also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and it surveys the current state of the art and the future generation of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
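The all-to-all comparison pattern mentioned above parallelizes naturally; in this toy sketch a naive identity score stands in for a real alignment algorithm, and the pairwise work is fanned out over a worker pool:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def similarity(pair):
    """Toy pairwise score: fraction of positions where two equal-length
    sequences agree (a stand-in for a real alignment score)."""
    a, b = pair
    return (a, b, sum(x == y for x, y in zip(a, b)) / len(a))

# All-to-all comparison of a few DNA fragments across workers.
seqs = ["ACGT", "ACGA", "TCGT", "AAAA"]
pairs = list(combinations(seqs, 2))
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(similarity, pairs))
best = max(scores, key=lambda s: s[2])
print(best)  # -> ('ACGT', 'ACGA', 0.75)
```

With n sequences there are n(n-1)/2 pairs, so the workload grows quadratically even though each comparison is polynomial, which is exactly why such problems demand HPC at genomic scale.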
Procedia PDF Downloads 364
24189 Evaluating the Effectiveness of Science Teacher Training Programme in National Colleges of Education: a Preliminary Study, Perceptions of Prospective Teachers
Authors: A. S. V Polgampala, F. Huang
Abstract:
This is an overview of what is entailed in an evaluation and the issues to be aware of when class observation is being done. This study examined the effects of evaluating the teaching practice of a 7-day 'block teaching' session in a pre-service science teacher training program at a reputed National College of Education in Sri Lanka. Effects were assessed in three areas: evaluation of the training process, evaluation of the training impact, and evaluation of the training procedure. Data for this study were collected by class observation of 18 teachers from 9 to 16 February 2017. The prospective science teachers who participated in the study were evaluated based on a format newly introduced by the NIE. The data collected were analyzed qualitatively using the Miles and Huberman procedure for analyzing qualitative data: data reduction, data display, and conclusion drawing/verification. It was observed that the trainees showed confidence in teaching those competencies and skills. Teacher educators' dissatisfaction has had a great impact on the evaluation process.
Keywords: evaluation, perceptions and perspectives, pre-service, science teaching
Procedia PDF Downloads 315
24188 Detecting Venomous Files in IDS Using an Approach Based on Data Mining Algorithm
Authors: Sukhleen Kaur
Abstract:
In security groundwork, the Intrusion Detection System (IDS) has become an important component and has received increasing attention in recent years. An IDS is one of the effective ways to detect different kinds of attacks and malicious code in a network and helps to secure it. Data mining techniques can be applied to IDS to analyze the large amount of data and give better results. Data mining can contribute to improving intrusion detection by adding a level of focus to anomaly detection. So far, studies have concentrated on finding attacks, but this paper detects malicious files. Some intruders do not attack directly; instead, they hide harmful code inside files, or corrupt those files and attack the system. These files are detected according to defined parameters, which form two lists of files: normal files and harmful files. After that, data mining is performed. In this paper a hybrid classifier is used, combining the Naive Bayes and Ripper classification methods. The results show how an uploaded file in the database is tested against the parameters, characterized as either a normal or a harmful file, and then mined. Moreover, when a user tries to mine a harmful file, an exception is generated indicating that mining cannot be performed on corrupted or harmful files.
Keywords: data mining, association, classification, clustering, decision tree, intrusion detection system, misuse detection, anomaly detection, naive Bayes, ripper
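The Naive Bayes half of the hybrid classifier can be sketched over binary file attributes; the feature names and training rows below are invented for illustration, and the Ripper rule learner is omitted:

```python
from math import log

def train_nb(rows, labels):
    """Bernoulli naive Bayes with Laplace smoothing over binary file
    attributes (e.g. suspicious extension, size anomaly, embedded code)."""
    model = {}
    for cls in set(labels):
        idx = [i for i, l in enumerate(labels) if l == cls]
        prior = len(idx) / len(labels)
        probs = [(sum(rows[i][j] for i in idx) + 1) / (len(idx) + 2)
                 for j in range(len(rows[0]))]
        model[cls] = (prior, probs)
    return model

def classify(model, x):
    def score(cls):
        prior, probs = model[cls]
        return log(prior) + sum(
            log(p if v else 1 - p) for p, v in zip(probs, x))
    return max(model, key=score)

# features: [suspicious_extension, size_anomaly, embedded_executable_code]
files  = [[1, 1, 1], [1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0]]
labels = ["harmful", "harmful", "normal", "normal", "normal"]
model = train_nb(files, labels)
print(classify(model, [1, 0, 1]))  # -> harmful
```

A file classified as harmful would then be placed on the harmful-files list, and any mining request against it would raise the exception described above.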
Procedia PDF Downloads 414
24187 Generalized Approach to Linear Data Transformation
Authors: Abhijith Asok
Abstract:
This paper presents a generalized approach for the simple linear data transformation, Y=bX, through an integration of multidimensional coordinate geometry, vector space theory and polygonal geometry. The scaling is performed by adding an additional ’Dummy Dimension’ to the n-dimensional data, which helps plot two dimensional component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of observations in any of the 2n spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the ’Dummy Axis’. The derived scaling factor was found to be dependent on the coordinates of the common point of origin for diverging straight lines and the plane of extension, chosen on and perpendicular to the ’Dummy Axis’, respectively. This result indicates the geometrical interpretation of a linear data transformation and hence, opportunities for a more informed choice of the factor ’b’, based on a better choice of these coordinate values. The paper follows on to identify the effect of this transformation on certain popular distance metrics, wherein for many, the distance metric retained the same scaling factor as that of the features.
Keywords: data transformation, dummy dimension, linear transformation, scaling
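The closing observation, that many distance metrics retain the scaling factor of Y = bX, is easy to verify numerically for the Euclidean metric:

```python
def euclidean(p, q):
    """Euclidean distance between two points of equal dimension."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def scale(point, b):
    """The linear transformation Y = bX applied component-wise."""
    return [b * x for x in point]

x1, x2, b = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0], 2.5
d_before = euclidean(x1, x2)
d_after = euclidean(scale(x1, b), scale(x2, b))
print(d_after / d_before)  # -> 2.5, the metric inherits the factor b
```

This follows from homogeneity of the norm: ||bX1 - bX2|| = |b| · ||X1 - X2||, so relative distances, and hence nearest-neighbor structure, are preserved under the transformation.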
Procedia PDF Downloads 297
24186 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health
Authors: Minna Pikkarainen, Yueqiang Xu
Abstract:
The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed, so that health care and self-care become a seamless continuum: "correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions", which we call the connected health approach. Currently, issues related to security, privacy, consumer consent, and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. A MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. This work targets health and wellness ecosystems. Each ecosystem consists of key actors: 1) the individual (citizen or professional controlling/using the services), i.e., the data subject; 2) services providing personal data (e.g., startups providing data collection apps or devices); 3) health and wellness services utilizing the aforementioned data; and 4) services authorizing access to this data under the individual's explicit consent.
Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models and proposes a fifth type, the MyData Blockchain Platform. This new architecture is developed through the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain: thus the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
Keywords: blockchain, health data, platform, action design
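The hash-linked record structure underlying such a platform can be sketched with a toy consent chain (stdlib hashing only; this illustrates tamper-evidence in general, not the proposed platform's actual architecture):

```python
import hashlib
import json

def add_record(chain, record):
    """Append a consent record linked to the previous one by its hash,
    a minimal stand-in for anchoring MyData consents on a blockchain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every link; any edit to an earlier record breaks it."""
    for i, block in enumerate(chain):
        expect = chain[i - 1]["hash"] if i else "0" * 64
        body = {"record": block["record"], "prev": block["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != expect or block["hash"] != digest:
            return False
    return True

chain = []
add_record(chain, {"subject": "citizen-1", "grants": "sleep-data", "to": "wellness-app"})
add_record(chain, {"subject": "citizen-1", "revokes": "sleep-data"})
print(verify(chain))  # -> True
chain[0]["record"]["grants"] = "all-data"  # tampering is detectable
print(verify(chain))  # -> False
```

This is the property that lets individuals and auditors trust the consent history without trusting any single orchestrator, which is the move from pseudo-decentralization to full decentralization the paper argues for.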
Procedia PDF Downloads 100