Search results for: R data science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26133

24453 Intrusion Detection System Using Linear Discriminant Analysis

Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou

Abstract:

Most of the existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and inaccurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the dimension of the samples. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudo-inverse to avoid the singularity of the within-class scatter matrix caused by the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the (PCA+LDA) method clearly outperforms the pseudo-inverse LDA method when large training data are available.
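
As an illustration of the first approach, a minimal sketch of a PCA-then-LDA-then-KNN pipeline in scikit-learn, run on synthetic stand-in data rather than KDDcup99/NSL-KDD itself; the dimensions and hyperparameters here are illustrative assumptions, not the paper's settings:

```python
# Minimal sketch of the PCA+LDA+KNN pipeline described above, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# 41 features loosely mirrors KDD-style traffic records (an assumption)
X, y = make_classification(n_samples=500, n_features=41, n_informative=10,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA first reduces the dimension so the LDA within-class scatter matrix is
# no longer singular (the SSS problem); KNN then classifies in LDA space
model = make_pipeline(PCA(n_components=10),
                      LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```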

Keywords: LDA, Pseudoinverse, PCA, IDS, NSL-KDD, KDDcup99

Procedia PDF Downloads 216
24452 From Waste to Wealth: A Future Paradigm for Plastic Management Using Blockchain Technology

Authors: Jim Shi, Jasmine Chang, Nesreen El-Rayes

Abstract:

The world has been experiencing a steadily increasing trend in both the production and consumption of plastic. The global consumer revolution would not have been possible without plastic, thanks to its salient features of inexpensiveness and durability. But, as a double-edged sword, its very durability has returned to haunt and even jeopardize us. The exacerbating plastic crisis has attracted various global initiatives and actions. Simultaneously, firms are eager to adopt new technology as they perceive more potential and merit in Industry 4.0 technologies. For example, blockchain technology (BCT) is drawing the attention of numerous stakeholders because of its wide range of outstanding features that promise to enhance supply chain operations. However, from a research perspective, most of the literature addresses the plastic crisis from either environmental or social perspectives, whereas analysis from the data science and technology perspective is relatively scarce. To this end, this study aims to fill this gap and cover the plastic crisis from a holistic view of environmental, social, technological, and business perspectives. In particular, we propose a mathematical model to examine the inclusion of BCT to enhance and improve efficiency on the upstream and downstream sides of the plastic value chain, where the whole value chain is coordinated systematically and its interoperability can be optimized. Consequently, the Environmental, Social, and Governance (ESG) goal and Circular Economy (CE) sustainability can be maximized.

Keywords: blockchain technology, plastic, circular economy, sustainability

Procedia PDF Downloads 66
24451 Fusing Mentorship, Leadership and Empowerment Among Young Women in STEM

Authors: Anne Bubriski

Abstract:

Despite improvements in gender inequalities, women and girls continue to face glass ceilings, underrepresentation, and harmful stereotypes that can limit their aspirations and opportunities in STEM. While girls take similar high school math and science classes, boys are more likely to take physics and six times more likely to take an engineering course. The gap becomes even larger for minority or low-income girls. This gender gap is not due to biology; rather, it is due to cultural, social, and institutional forces. As girls get older, these forces often ‘teach’ them that ‘STEM is more for boys’. The STEM gender gap widens in college, with only 20% of engineering degrees being awarded to women, and by the time women enter the workforce, they occupy only about 13% of engineering jobs. At the University of Central Florida, the Women’s and Gender Studies Program has developed a unique mentoring program to address these issues: Science Leadership and Mentoring (SLAM). What is unique about the approach of SLAM is that we address this problem through leadership and STEM. We help girls make connections between leadership and STEM—that young women can be leaders as scientists and that scientists are leaders making a change. This is particularly needed and relevant to our community because, to our knowledge, SLAM is one of the only, if not the only, mentoring programs in the United States pairing college women with 7th-grade girls that focuses on both STEM and leadership. SLAM is a curriculum-based mentoring program pairing one 7th-grade girl with one UCF undergraduate STEM major. SLAM empowers young women to be assertive, brave, confident, independent, inquisitive and proud leaders in STEM. SLAM seeks to promote young women’s inspiration and excitement about STEM fields and careers while also building leadership abilities such as problem-solving, teamwork and cooperation, cultural identity and ethnic pride, advocacy for positive change, and goals for the future. SLAM serves about fifteen 7th-grade girls per academic year and about 20 UCF students. SLAM holds weekly mentoring meetings lasting about 90 minutes, covering topics on leadership, STEM majors and careers, and STEM leadership. This past year, SLAM received a Community Action Grant from the American Association of University Women (AAUW) to run a sub-program, SLAM-Space. SLAM-Space focused on exposing SLAM participants to aerospace engineering and other space-related STEM fields, such as physics and astronomy, through guest speakers, workshops and field trips, including one to the Kennedy Space Center. The proposed paper presentation will give an overview of SLAM-Space and the findings from pre- and post-surveys, in-depth interviews and focus groups on the SLAM participants' experiences in the program.

Keywords: gender, leadership, STEM, empowerment

Procedia PDF Downloads 29
24450 Environmental Variables as Determinants of Students' Achievement in Biology in Secondary Schools in South West Nigeria

Authors: Ayeni Margaret Foluso, K. A. Omotayo

Abstract:

This study investigated the impact of selected environmental variables, namely class size and laboratory adequacy, as determinants of students’ achievement in biology in secondary schools. The purpose was to find out whether these environmental variables can bring about improvement in the learning of biology by Senior Secondary School students. The study design was descriptive research of the survey type. Two instruments were used: a Biology Achievement Test and a School Environment Questionnaire. The population of the study consisted of all biology students in both public and private Senior Secondary School Class III (SSIII) in the three selected states in South West Nigeria. A sample of 900 biology students and 45 biology teachers from both public and private Senior Secondary School Class III was used. Two research hypotheses were generated for the study. The data collected were subjected to the descriptive statistics of mean and standard deviation, and the inferential statistic of regression analysis was employed to test the hypotheses formulated. The results revealed that the selected environmental variables had an influence on students’ achievement in biology.

Keywords: environmental variables, determinants, students’ achievement, school science

Procedia PDF Downloads 468
24449 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity of rule induction from datasets with contaminated attribute values created by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
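
As a hedged illustration of the statistical-test idea at STRIM's core (not the authors' implementation), one can test whether a candidate if-then rule's accuracy significantly exceeds the base rate of its decision class with a one-sided binomial test; the counts below are invented:

```python
# Test whether rows matching a rule's condition part predict its decision
# class more often than the class's prior probability would suggest.
from scipy.stats import binomtest

n_matching = 120   # decision-table rows matching the rule's condition part
n_correct = 95     # of those, rows whose decision equals the rule's class
base_rate = 0.5    # prior probability of that decision class (assumed)

result = binomtest(n_correct, n_matching, base_rate, alternative="greater")
print(f"p-value = {result.pvalue:.3e}")
if result.pvalue < 0.01:
    print("rule is statistically significant at the 1% level")
```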

Keywords: rule induction, decision table, missing data, noise

Procedia PDF Downloads 383
24448 From Equations to Structures: Linking Abstract Algebra and High-School Algebra for Secondary School Teachers

Authors: J. Shamash

Abstract:

The high-school curriculum in algebra deals mainly with the solution of different types of equations. However, modern algebra has a completely different viewpoint and is concerned with algebraic structures and operations. A question then arises: What might be the relevance and contribution of an abstract algebra course for developing expertise and mathematical perspective in secondary school mathematics instruction? This is the focus of this paper. The course Algebra: From Equations to Structures is a carefully designed abstract algebra course for Israeli secondary school mathematics teachers. The course provides an introduction to algebraic structures and modern abstract algebra, and links abstract algebra to the high-school curriculum in algebra. It follows the historical attempts of mathematicians to solve polynomial equations of higher degrees, attempts which resulted in the development of group theory and field theory by Galois and Abel. In other words, algebraic structures grew out of a need to solve certain problems, and proved to be a much more fruitful way of viewing them; this development culminated in major theorems in both group theory and field theory. Along the historical ‘journey’, many other major results in algebra of the past 150 years are introduced, and recent directions that current research in algebra is taking are highlighted. This course is part of a unique master’s program – the Rothschild-Weizmann Program – offered by the Weizmann Institute of Science, especially designed for practicing Israeli secondary school teachers. A major component of the program comprises mathematical studies tailored for the students in the program. The rationale and structure of the course Algebra: From Equations to Structures are described, and its relevance to teaching school algebra is examined by analyzing three kinds of data sources. The first is position papers written by the participating teachers regarding the relevance of advanced mathematics studies to expertise in classroom instruction. The second data source is didactic materials designed by the participating teachers, in which they connected the mathematics learned in the mathematics courses to the school curriculum and teaching. The third data source is final projects carried out by the teachers based on material learned in the course.

Keywords: abstract algebra, linking abstract algebra and school mathematics, school algebra, secondary school mathematics, teacher professional development

Procedia PDF Downloads 133
24447 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services

Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme

Abstract:

Much of the data that inform the decisions of governments, corporations, and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, and user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g., table detection) and natural language processing (e.g., entity detection and disambiguation) are proposed.
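
As a minimal, hypothetical illustration of the entity-detection step, using spaCy's off-the-shelf small English model as a stand-in for the authors' system (the sentence and model choice are assumptions):

```python
# Detect named entities in a financial-style sentence with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
doc = nlp("Acme Holdings reported revenue of $2.4 billion for fiscal year 2021.")
for ent in doc.ents:
    print(ent.text, "->", ent.label_)  # e.g. ORG, MONEY, DATE
```

In a full pipeline, a disambiguation step would then link each detected entity to a canonical record, before schema-based record extraction assembles the fields into database rows.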

Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing

Procedia PDF Downloads 96
24446 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is an unavoidable trade-off in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregate, join, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A suitable metric was proposed to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
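
A hedged sketch of the regression idea: predict cluster performance from the factors named above. The feature names, data values, and linear form below are illustrative assumptions, not HBDP's actual formula:

```python
# Fit a linear model of mean SQL task time vs. cluster configuration,
# then score a candidate purchase plan. All numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: [cpu_benchmark, memory_gb, n_physical_hosts]
X = np.array([[520, 64, 4], [520, 128, 4], [780, 64, 8],
              [780, 128, 8], [950, 256, 12], [950, 128, 12]])
y = np.array([41.0, 35.5, 22.0, 19.5, 11.0, 13.0])  # mean task time in seconds

reg = LinearRegression().fit(X, y)
print("coefficients:", reg.coef_, "intercept:", reg.intercept_)

# predicted time for a candidate configuration within the fixed fund
candidate = np.array([[780, 256, 10]])
print("predicted task time:", reg.predict(candidate)[0])
```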

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks

Procedia PDF Downloads 217
24445 Model Predictive Controller for Pasteurization Process

Authors: Tesfaye Alamirew Dessie

Abstract:

Our study focuses on developing a Model Predictive Controller (MPC) and evaluating it against a traditional PID controller for a pasteurization process. Utilizing system identification on the experimental data, the dynamics of the pasteurization process were identified. Using best fit with data validation, residual, and stability analysis, the quality of several model architectures was evaluated. The validation data fit the auto-regressive with exogenous input (ARX322) model of the pasteurization process by roughly 80.37 percent. The ARX322 model structure was used to create the MPC and PID control techniques. After comparing controller performance based on settling time, overshoot percentage, and stability analysis, it was found that the MPC controller outperforms the PID for those parameters.
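
A minimal sketch of fitting an ARX model by ordinary least squares, assuming that "ARX322" denotes orders na=3, nb=2 and input delay nk=2 (our reading of the name, not confirmed by the paper); the data are synthetic:

```python
# Least-squares ARX fit: A(q)y(t) = B(q)u(t-nk) + e(t).
import numpy as np

def fit_arx(y, u, na=3, nb=2, nk=2):
    start = max(na, nb + nk - 1)
    rows = []
    for t in range(start, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]   # -y terms per ARX convention
        past_u = [u[t - nk - j] for j in range(nb)]
        rows.append(past_y + past_u)
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta  # [a1..a_na, b1..b_nb]

# synthetic demo data from a known ARX(3,2,2) system
rng = np.random.default_rng(0)
u = rng.normal(size=300)
y = np.zeros(300)
for t in range(3, 300):
    y[t] = (0.6*y[t-1] - 0.2*y[t-2] + 0.05*y[t-3]
            + 0.8*u[t-2] + 0.3*u[t-3] + 0.01*rng.normal())
print(fit_arx(y, u))
```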

Keywords: MPC, PID, ARX, pasteurization

Procedia PDF Downloads 145
24444 Point Estimation for the Type II Generalized Logistic Distribution Based on Progressively Censored Data

Authors: Rana Rimawi, Ayman Baklizi

Abstract:

Skewed distributions are important models that are frequently used in applications. Generalized distributions form a class of skewed distributions and have gained widespread use in applications because of their flexibility in data analysis. More specifically, the generalized logistic distribution, with its different types, has received considerable attention recently. In this study, based on progressively type-II censored data, we consider point estimation for the type II generalized logistic distribution (Type II GLD). We develop several estimators for its unknown parameters, including maximum likelihood estimators (MLE), Bayes estimators, and linear estimators (BLUE). The estimators are compared by simulation using the criteria of bias and mean square error (MSE). An illustrative example based on a real data set is given.
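
A hedged sketch of maximum likelihood estimation for the Type II GLD shape parameter, for a complete (uncensored) sample; the paper works with progressively type-II censored data, which requires a modified likelihood, and its exact parameterization may differ from the common one assumed here, f(x; b) = b·exp(-bx) / (1 + exp(-x))^(b+1):

```python
# MLE of the Type II generalized logistic shape parameter b.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(b, x):
    if b <= 0:
        return np.inf
    return -(len(x) * np.log(b) - b * np.sum(x)
             - (b + 1) * np.sum(np.log1p(np.exp(-x))))

# exact inverse-CDF sample: F(x) = 1 - (1 + e^x)^(-b)  =>  x = log(u^(-1/b) - 1)
rng = np.random.default_rng(1)
b_true = 2.0
u = rng.uniform(size=300)
x = np.log(u ** (-1.0 / b_true) - 1.0)

res = minimize_scalar(lambda b: neg_log_lik(b, x), bounds=(0.01, 20), method="bounded")
print("MLE of shape b:", res.x)  # should be close to b_true = 2.0
```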

Keywords: point estimation, type II generalized logistic distribution, progressive censoring, maximum likelihood estimation

Procedia PDF Downloads 188
24443 Didactic Suitability and Mathematics Through Robotics and 3D Printing

Authors: Blanco T. F., Fernández-López A.

Abstract:

Nowadays, education, motivated by the new demands of the 21st century, acquires a dimension that converts the skills new generations may need into a huge and uncertain set of knowledge too broad to be covered in its entirety. Within this set, and as tools to reach them, we find Learning and Knowledge Technologies (LKT). Thus, in order to prepare students for an ever-changing society in which the technological boom pervades everything, it is essential to develop digital competence. Nevertheless, LKT seem not to have found their place in the educational system. This work aims to go a step further in the research on the most appropriate procedures and resources for technological integration in the classroom. The main objective of this exploratory study is to analyze the didactic suitability (epistemic, cognitive, affective, interactional, mediational, and ecological) of teaching and learning processes of mathematics with robotics and 3D printing. The analysis is drawn from a STEAM (Science, Technology, Engineering, Art and Mathematics) project that has the Pilgrimage Way to Santiago de Compostela as a common thread. The sample is made up of 25 primary education students (10 and 11 years old). A qualitative design-research methodology has been followed, with the sessions distributed according to the type of technology applied: robotics was focused on learning two-dimensional mathematical notions, while 3D design and printing were oriented towards three-dimensional concepts. The data collection instruments used were evaluation rubrics, recordings, field notebooks, and participant observation. The indicators of didactic suitability proposed by Godino (2013) were used for the analysis of the data. In general, the results show a medium-high level of didactic suitability. In particular, high mediational and cognitive suitability stand out, which led to a better understanding of the positions and relationships of three-dimensional bodies in space and of the concept of angle. With regard to the other indicators of didactic suitability, it should be noted that interactional suitability would require more attention and affective suitability a deeper study. In conclusion, the research has revealed great expectations around the combination of teaching-learning processes of mathematics and LKT, although there is still a long way to go in terms of the provision of means and teacher training.

Keywords: 3D printing, didactic suitability, educational design, robotics

Procedia PDF Downloads 88
24442 Cybervetting and Online Privacy in Job Recruitment – Perspectives on the Current and Future Legislative Framework Within the EU

Authors: Nicole Christiansen, Hanne Marie Motzfeldt

Abstract:

In recent years, more and more HR professionals have been using cybervetting in job recruitment in an effort to find the perfect match for the company. These practices are growing rapidly, accessing a vast amount of data from social networks, some of which is privileged and protected information. Thus, there is a risk that the right to privacy is becoming a duty to manage one's private data. This paper investigates to what degree a job applicant's fundamental rights are adequately protected in current and future legislation in the EU. This paper argues that current data protection regulations and forthcoming regulations on the use of AI ensure sufficient protection. However, even though the regulation on paper protects employees within the EU, the recruitment sector may not pay sufficient attention to it, as it does not specifically target this area. Therefore, the lack of specific labor and employment regulation is a concern that the social partners should attend to.

Keywords: AI, cyber vetting, data protection, job recruitment, online privacy

Procedia PDF Downloads 71
24441 Sequential Pattern Mining from Data of Medical Record with Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)

Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim

Abstract:

This research was conducted at the Bolo Primary Health Care in Bima Regency. The purpose of the research is to find the association patterns formed in the medical records database of Bolo Primary Health Care's patients. The data used are secondary data from the PHC's medical records database. Sequential pattern mining is the technique used for the analysis. Transaction data were generated from Patient_ID, Check_Date, and diagnosis. Sequential Pattern Discovery using Equivalence classes (SPADE) is one of the algorithms in sequential pattern mining; it finds frequent sequences in transaction data using a vertical database layout and a sequence-join process. The result of the SPADE algorithm is a set of frequent sequences that are then used to form rules. This technique is used to find association patterns between item combinations. Based on sequential association rule analysis with the SPADE algorithm, for a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained from the Patient_ID, Check_Date, and diagnosis data in the Bolo PHC.
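
A minimal, hypothetical sketch of the vertical id-list idea behind SPADE: frequent 1-sequences are found from per-patient timestamp lists, and 2-sequences are formed by temporal joins of those lists. The field names mirror the abstract, but the records, diagnoses, and thresholds are invented:

```python
# SPADE-style vertical layout and temporal join on toy medical records.
from collections import defaultdict

records = [  # (patient_id, check_date, diagnosis)
    (1, "2019-01-05", "hypertension"), (1, "2019-02-10", "diabetes"),
    (2, "2019-01-08", "hypertension"), (2, "2019-03-02", "diabetes"),
    (3, "2019-02-01", "hypertension"),
]

# vertical layout: item -> {sequence id (patient) -> list of timestamps}
idlists = defaultdict(lambda: defaultdict(list))
for pid, date, dx in records:
    idlists[dx][pid].append(date)

n_sequences = len({pid for pid, _, _ in records})
min_support = 0.5  # fraction of patients (invented threshold)

# frequent 1-sequences
frequent1 = {dx for dx, occ in idlists.items() if len(occ) / n_sequences >= min_support}

# temporal join: <a then b> is supported if some a-visit precedes a b-visit
# (ISO date strings compare correctly in lexicographic order)
for a in frequent1:
    for b in frequent1:
        if a == b:
            continue
        support = sum(
            1 for pid in idlists[a]
            if pid in idlists[b] and min(idlists[a][pid]) < max(idlists[b][pid])
        ) / n_sequences
        if support >= min_support:
            print(f"<{a} -> {b}> support={support:.2f}")
```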

Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm

Procedia PDF Downloads 388
24440 Estimation of Reservoirs Fracture Network Properties Using an Artificial Intelligence Technique

Authors: Reda Abdel Azim, Tariq Shehab

Abstract:

The main objective of this study is to develop a subsurface fracture map of naturally fractured reservoirs by overcoming the limitations associated with different data sources in characterising fracture properties. Some of these limitations are overcome by employing a nested neuro-stochastic technique to establish the inter-relationships between different data sources, such as conventional well logs, borehole images (FMI), core descriptions, and seismic attributes, and then characterising fracture properties in terms of fracture density and fractal dimension for each data source. Fracture density is an important property of a fracture network system, as it is a measure of the cumulative area of all the fractures in a unit volume of the system; the fractal dimension is likewise used to characterise self-similar objects such as fractures. At the wellbore locations, fracture density and fractal dimension can only be estimated for the limited sections where FMI data are available. Therefore, an artificial intelligence technique is applied to approximate these quantities at locations along the wellbore where hard data are not available. It should be noted that artificial intelligence techniques have proven their effectiveness in this domain of application.
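
A hedged sketch of the interpolation idea: train a neural network to map well-log attributes to fracture density where FMI data exist, then predict it where they do not. The feature names, data, and network size are assumptions, not the authors' nested neuro-stochastic model:

```python
# Approximate fracture density along the wellbore from log attributes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
# columns: [gamma_ray, sonic_dt, resistivity] -- invented log attributes
X_fmi = rng.normal(size=(200, 3))  # depths with FMI coverage (hard data)
frac_density = 2.0 + X_fmi @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.normal(size=200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0))
model.fit(X_fmi, frac_density)

X_no_fmi = rng.normal(size=(50, 3))  # depths without FMI data
print("estimated fracture density:", model.predict(X_no_fmi)[:5])
```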

Keywords: naturally fractured reservoirs, artificial intelligence, fracture intensity, fractal dimension

Procedia PDF Downloads 239
24439 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia

Authors: Tim Nedyalkov

Abstract:

A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services. Collecting, managing, and retaining large amounts of data in cloud environments makes information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment (Notifiable Data Breaches) Act 2017 in Australia, as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success and influencing the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers in meeting rapidly changing regulatory and compliance requirements.

Keywords: cloud compliance, cloud security, data governance, privacy protection

Procedia PDF Downloads 103
24438 Simulations to Predict Solar Energy Potential by ERA5 Application in North Africa

Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees

Abstract:

The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters, generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis. The global solar radiation (GSR) distribution was estimated over six selected locations in North Africa for the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE), and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, revealing significant variation of the errors across seasons. The performance of the dataset also changes with the temporal resolution of the data used for comparison: monthly mean values show better performance, but the accuracy of the data is compromised. The ERA-5 solar radiation data are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 93% to 99% for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to support solar energy applications in all fields, by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
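
For reference, the three reported error metrics computed with numpy (a generic sketch: `obs` stands for the ground measurements and `era5` for the reanalysis values; the sample numbers are invented):

```python
# RMSE, MBE, and MAE between observed and reanalysis solar radiation.
import numpy as np

def rmse(obs, era5):
    return np.sqrt(np.mean((era5 - obs) ** 2))

def mbe(obs, era5):
    return np.mean(era5 - obs)  # sign indicates over- or under-estimation

def mae(obs, era5):
    return np.mean(np.abs(era5 - obs))

obs = np.array([5.1, 6.3, 7.0, 6.8])   # illustrative GSR values (kWh/m²/day)
era5 = np.array([5.0, 6.5, 6.8, 7.1])
print(rmse(obs, era5), mbe(obs, era5), mae(obs, era5))
```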

Keywords: solar energy, solar radiation, ERA-5, potential energy

Procedia PDF Downloads 197
24437 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data

Authors: Fan Gao, Lior Pachter

Abstract:

The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We used the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type-specific features, and we show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.

Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome

Procedia PDF Downloads 145
24436 Understanding Resilience in Vulnerable Business Settings: Systematic Literature Review in Small and Medium Enterprises

Authors: Muhammedamin Hussen Saad, Geoffrey Haagler, Onno Omta, Gerben Van Der Velde

Abstract:

Unfolding chaos and persistent disruptions pose threats to companies’ performance, especially in the vulnerable settings of SMEs, particularly in developing countries. Attention to resilience research in the academic world has increased considerably during the last decade, judging by the number of papers published. As we are interested in adding to the understanding of the foundation and development of the concept of resilience, we focus especially on structuring the literature on business resilience in those vulnerable settings. A well-structured systematic search and review procedure was deployed. First, we defined key search terms and applied them to multiple databases (Scopus, Web of Science, Google Scholar, Emerald, and Science Direct). To make our literature search more encompassing, we augmented it with co-citation analysis, reference checking, and hand-searching techniques. The paper offers (1) an overview of the SME resilience literature from 2000 up to March 2017, comprising 88 articles, and (2) special attention, within that overview, to developing countries. This review concludes that the resilience literature is very diverse in its definitions and measurements, and is inconclusive about its influencing factors. Furthermore, the resilience literature is based predominantly on research in the developed world. On the basis of how the concept of resilience emerges from the literature, we describe distinct features of resilience, give options to extend the theoretical bases of research into resilience, and describe concrete ideas for further research.

Keywords: business resilience, systematic review, SMEs, developing countries

Procedia PDF Downloads 153
24435 Role of Geomatics in Architectural and Cultural Conservation

Authors: Shweta Lall

Abstract:

The intent of this paper is to demonstrate the role of computerized auxiliary science in advancing the desired and necessary alliance of historians, surveyors, topographers, and analysts in architectural conservation and management. The digital-era practice of recording architectural and cultural heritage with a view to its preservation, dissemination, and planned development is discussed in this paper. Geomatics includes practices such as remote sensing, photogrammetry, surveying, Geographic Information Systems (GIS), and laser scanning technology. All these resources support architectural and conservation applications, which are identified through the various case studies analysed in this paper. The standardised outcomes and methodologies are listed and described using relevant case studies. The main components of the geomatics methodology adopted in conservation are data acquisition, processing, and presentation. Geomatics is used in a wide range of activities involved in architectural and cultural heritage: damage and risk assessment analysis, documentation, 3-D model construction, virtual reconstruction, spatial and structural decision-making analysis, and monitoring. This paper summarizes the capabilities and limitations of the geomatics field in architectural and cultural conservation. Policy-makers, urban planners, architects, and conservationists not only need these answers but also need to put them into practice in a predictable, transparent, spatially explicit, and inexpensive manner.

Keywords: architectural and cultural conservation, geomatics, GIS, remote sensing

Procedia PDF Downloads 133
24434 Meta Mask Correction for Nuclei Segmentation in Histopathological Image

Authors: Jiangbo Shi, Zeyu Gao, Chen Li

Abstract:

Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep learning-based methods. However, the development of such an automated method requires a large amount of data with precisely annotated masks, which are hard to obtain. Training with weakly labeled data is a popular solution for reducing the workload of annotation. In this paper, we propose a novel meta-learning-based nuclei segmentation method which follows the label-correction paradigm to leverage data with noisy masks. Specifically, we design a fully convolutional meta-model that can correct noisy masks by using a small amount of clean meta-data. The corrected masks are then used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model. Extensive experimental results on two nuclei segmentation datasets show that our method achieves state-of-the-art results. In particular, in some noise scenarios, it even exceeds the performance of training on supervised data.
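
A simplified, hypothetical sketch of the alternating scheme described above. True bi-level optimization differentiates through the inner update; here the two models simply take turns, which only approximates the paper's method, and the toy networks stand in for real segmentation architectures:

```python
# Alternating label-correction loop: a meta-model corrects noisy masks,
# the corrected masks supervise the segmentation model, and a small clean
# meta-set supervises the meta-model.
import torch
import torch.nn as nn

seg_model = nn.Conv2d(3, 1, 3, padding=1)    # stand-in segmentation network
meta_model = nn.Conv2d(4, 1, 3, padding=1)   # (image + noisy mask) -> corrected mask
opt_seg = torch.optim.Adam(seg_model.parameters(), lr=1e-3)
opt_meta = torch.optim.Adam(meta_model.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(img, noisy_mask, meta_img, meta_noisy, meta_clean):
    # 1) meta-model corrects the noisy training masks
    corrected = torch.sigmoid(meta_model(torch.cat([img, noisy_mask], dim=1)))
    # 2) corrected masks supervise the segmentation model
    loss_seg = bce(seg_model(img), corrected.detach())
    opt_seg.zero_grad(); loss_seg.backward(); opt_seg.step()
    # 3) the small clean meta-set supervises the meta-model itself
    pred = meta_model(torch.cat([meta_img, meta_noisy], dim=1))
    loss_meta = bce(pred, meta_clean)
    opt_meta.zero_grad(); loss_meta.backward(); opt_meta.step()
    return loss_seg.item(), loss_meta.item()

img = torch.rand(2, 3, 64, 64); noisy = torch.rand(2, 1, 64, 64)
m_img = torch.rand(2, 3, 64, 64); m_noisy = torch.rand(2, 1, 64, 64)
m_clean = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(train_step(img, noisy, m_img, m_noisy, m_clean))
```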

Keywords: deep learning, histopathological image, meta-learning, nuclei segmentation, weak annotations

Procedia PDF Downloads 126
24433 Virtual Co-Creation Model in Hijab Fashion Industry: Business Model Approach

Authors: Lisandy A. Suryana, Lidia Mayangsari, Santi Novani

Abstract:

The creative industry in Indonesia has become an important aspect of the economy. One of the sectors of the creative industry that gives the highest contribution to Indonesia’s GDP is the fashion sector. In line with Indonesia’s target of becoming the qibla of Muslim fashion of the world by 2020, all of the stakeholders of the business ecosystem should collaborate. Rather than focusing only on the internal aspects of producers, external actors such as customers, government, and the community become important to involve in the ecosystem to support the development and sustainability of the fashion sector. Unfortunately, although Indonesia has the biggest Muslim population, the penetration of the hijab business is only 10%. Therefore, this research aims to analyze and develop a virtual co-creation platform for the hijab creative industry as a strategy to achieve sustainability and increase market share. This preliminary research describes the main stakeholders in the hijab creative industry based on a business model approach. The business model is adapted by considering the service science context, and the data are collected using a qualitative approach, especially in-depth interviews. The business model shows the relationships between resource integration, value co-creation, the value proposition of the company, and the financial aspect of the business.

Keywords: value co-creation, Hijab Fashion Industry, creative industry, service business model, business model canvas

Procedia PDF Downloads 367
24432 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
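
A hedged sketch of the final modeling step: an AdaBoost classifier predicting leakage, combined with importance-based feature selection. The feature values are invented stand-ins for the gauge-block, assembly, and end-of-line measurements:

```python
# AdaBoost leakage classifier with importance-based feature reduction.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 20))  # 20 cross-process measurements (invented)
y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=1000) > 0).astype(int)  # leak y/n

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
selector = SelectFromModel(clf, prefit=True, threshold="mean")
print("features kept:", np.flatnonzero(selector.get_support()))

# retrain on the reduced, stable feature set
clf_small = AdaBoostClassifier(n_estimators=200, random_state=0)
clf_small.fit(selector.transform(X_tr), y_tr)
print("accuracy:", clf_small.score(selector.transform(X_te), y_te))
```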

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 150
24431 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic

Authors: Diogen Babuc

Abstract:

The question that motivates this work is how many devote themselves to discovering something in the world of science, where much is discerned and revealed, but at the same time much remains unknown. Methods: The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a+3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. In relation to the given key, the string is divided into several groups of substrings, each of length k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is performed on the basis of the Caesar algorithm, i.e., shifting by k characters; however, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b+1, it returns to its initial value. The algorithm is executed, following the same procedure, until the last substring in the list is traversed. Results: Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The x character is not used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than the other methods from the point of view of execution time and storage space.
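
A hypothetical sketch of the scheme as described above: split the string into substrings of length k, Caesar-shift each substring by the current k, increment k per substring, and reset when k exceeds b+1. Details such as the alphabet and the handling of non-letter characters are assumptions on our part:

```python
# Sketch of the Latin Djokovic cipher as described in the abstract.
import random
import string

ALPHA = string.ascii_lowercase

def _shift_chunks(text, k0, b, direction):
    chunks = [text[i:i + k0] for i in range(0, len(text), k0)]
    k, out = k0, []
    for chunk in chunks:
        out.append("".join(
            ALPHA[(ALPHA.index(c) + direction * k) % 26] if c in ALPHA else c
            for c in chunk
        ))
        k += 1
        if k > b + 1:   # wrap back to the initial key value
            k = k0
    return "".join(out)

def encipher(text, a):
    b = a + 3
    k0 = random.randint(a, b)  # random key in [a, b], fixed for the run
    return _shift_chunks(text, k0, b, +1), k0

def decipher(cipher, a, k0):
    return _shift_chunks(cipher, k0, a + 3, -1)

c, k0 = encipher("attackatdawn", a=3)
print(c, decipher(c, a=3, k0=k0) == "attackatdawn")
```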

Keywords: ciphering, deciphering, authentic, algorithm, polyalphabetic cipher, random key, methods comparison

Procedia PDF Downloads 91
24430 The Twin Terminal of Pedestrian Trajectory Based on City Intelligent Model (CIM) 4.0

Authors: Chen Xi, Liu Xuebing, Lao Xueru, Kuan Sinman, Jiang Yike, Wang Hanwei, Yang Xiaolang, Zhou Junjie, Xie Jinpeng

Abstract:

To further promote the development of smart cities, the microscopic "nerve endings" of the City Intelligent Model (CIM) are extended to be more sensitive. In this paper, we develop a pedestrian trajectory twin terminal based on CIM and CNN technology. It uses 5G networks, architectural and geoinformatics technologies, and convolutional neural networks, combined with deep learning models for human behavior recognition, to provide empirical data such as pedestrian flow data and human behavioral characteristics. These data ultimately form spatial performance evaluation criteria and spatial performance warning systems, making the empirical data accurate and intelligent for prediction and decision-making.

Keywords: urban planning, urban governance, CIM, artificial intelligence, sustainable development

Procedia PDF Downloads 383
24429 An Extended Inverse Pareto Distribution, with Applications

Authors: Abdel Hadi Ebraheim

Abstract:

This paper introduces a new extension of the inverse Pareto distribution in the framework of the Marshall-Olkin (1997) family of distributions. This model is capable of modeling various shapes of aging and failure data. The statistical properties of the new model are discussed. Several methods are used to estimate the parameters involved. Explicit expressions are derived for different types of moments of value in reliability analysis. Besides, the order statistics of samples from the new proposed model are studied. Finally, the usefulness of the new model for modeling reliability data is illustrated using two real data sets together with a simulation study.
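
For concreteness, a sketch of the Marshall-Olkin construction applied to an inverse Pareto baseline, assuming the common inverse Pareto CDF F(x) = (x/(x+β))^α; the paper's exact parameterization may differ:

```latex
% Marshall-Olkin extension of a baseline survival \bar F with tilt parameter
% \theta > 0, applied to an assumed inverse Pareto baseline (shape \alpha, scale \beta).
\[
\bar{G}(x) \;=\; \frac{\theta\,\bar{F}(x)}{1-(1-\theta)\,\bar{F}(x)},
\qquad
\bar{F}(x) \;=\; 1-\left(\frac{x}{x+\beta}\right)^{\alpha},
\qquad x>0,\; \alpha,\beta,\theta>0,
\]
```

with θ = 1 recovering the baseline inverse Pareto, so the extra parameter adds flexibility in the hazard shape.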

Keywords: Pareto distribution, Marshall-Olkin, reliability, hazard functions, moments, estimation

Procedia PDF Downloads 71
24428 Potential Determinants of Research Output: Comparing Economics and Business

Authors: Osiris Jorge Parcero, Néstor Gandelman, Flavia Roldán, Josef Montag

Abstract:

This paper uses cross-country unbalanced panel data covering up to 146 countries over the period 1996 to 2015 and is the first study to identify potential determinants of a country’s relative research output in economics versus business. More generally, it is also one of the first studies comparing economics and business. The results show that better policy-related data availability, higher income inequality, and lower ethnic fractionalization relatively favor economics. The findings are robust to two alternative fixed-effects specifications, three alternative definitions of economics and business, two alternative measures of research output (publications and citations), and the inclusion of meaningful control variables. To the best of our knowledge, our paper is also the first to demonstrate the importance of policy-related data as a driver of economic research. Our regressions show that the availability of this type of data is the single most important factor associated with the prevalence of economics over business as a research domain. Thus, our work has policy implications, as the availability of policy-related data is partially under policy control. Moreover, it has implications for students, professionals, universities, university departments, and research-funding agencies that face choices between profiles oriented toward economics and those oriented toward business. Finally, the conclusions suggest potential lines for further research.

Keywords: research output, publication performance, bibliometrics, economics, business, policy-related data

Procedia PDF Downloads 119
24427 Assessment of Routine Health Information System (RHIS) Quality Assurance Practices in Tarkwa Sub-Municipal Health Directorate, Ghana

Authors: Richard Okyere Boadu, Judith Obiri-Yeboah, Kwame Adu Okyere Boadu, Nathan Kumasenu Mensah, Grace Amoh-Agyei

Abstract:

Routine health information system (RHIS) quality assurance has become an important issue, not only because of its significance in promoting a high standard of patient care but also because of its impact on government budgets for the maintenance of health services. A routine health information system comprises healthcare data collection, compilation, storage, analysis, report generation, and dissemination on a routine basis in various healthcare settings. The data from an RHIS give a representation of health status, health services, and health resources. The sources of RHIS data are normally individual health records, records of services delivered, and records of health resources. Using reliable information from routine health information systems is fundamental in the healthcare delivery system. Quality assurance practices are measures that are put in place to ensure that the health data collected meet required quality standards, so that data generated from the system are fit for use. This study considered quality assurance practices in the RHIS processes. Methods: A cross-sectional study was conducted in eight health facilities in the Tarkwa Sub-Municipal Health Service in the western region of Ghana. The study involved routine quality assurance practices among the 90 health staff and management members selected from facilities in the Tarkwa Sub-Municipality who collected or used data routinely, from 24th December 2019 to 20th January 2020. Results: Generally, the Tarkwa Sub-Municipal health service appears to practice quality assurance during data collection, compilation, storage, analysis, and dissemination. The results show some achievement in quality control performance in report dissemination (77.6%), data analysis (68.0%), data compilation (67.4%), report compilation (66.3%), data storage (66.3%), and data collection (61.1%). Conclusions: Even though the Tarkwa Sub-Municipal Health Directorate engages in some control measures to ensure data quality, there is a need to strengthen the process to achieve the targeted level of performance (90.0%). There was a significant shortfall in quality assurance performance, especially during data collection, with respect to the expected performance.

Keywords: quality assurance practices, assessment of routine health information system quality, routine health information system, data quality

Procedia PDF Downloads 61
24426 Heart Failure Identification and Progression by Classifying Cardiac Patients

Authors: Muhammad Saqlain, Nazar Abbas Saqib, Muazzam A. Khan

Abstract:

Heart failure (HF) has become a major health problem in our society. The prevalence of HF increases with patient age, and it is a major cause of the high mortality rate in adults. Successful identification of HF and its progression can help reduce the individual and social burden of this syndrome. In this study, we use a real data set of cardiac patients to propose a classification model for the identification and progression of HF. The data set was divided into three age groups, namely young, adult, and old, and each age group was further classified into four classes according to the patient's current physical condition. Contemporary data mining classification algorithms were applied to each individual class of every age group to identify HF. The decision tree (DT) gives the highest accuracy of 90% and outperforms all other algorithms. Our model accurately diagnoses different stages of HF for each age group, and it can be very useful for the early prediction of HF.
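
A hedged sketch of the modeling step: a decision tree classifying patients within an age group by condition. The feature names and label rule below are invented illustrations, not the study's clinical variables:

```python
# Decision tree classifier on synthetic cardiac-style features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# columns: [ejection_fraction, systolic_bp, bnp_level] -- assumed features
X = rng.normal(loc=[45, 130, 300], scale=[10, 20, 150], size=(400, 3))
# toy severity classes 0..2 derived from two invented thresholds
y = (X[:, 0] < 40).astype(int) + (X[:, 2] > 400).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
print("cross-validated accuracy:", cross_val_score(tree, X, y, cv=5).mean())
```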

Keywords: decision tree, heart failure, data mining, classification model

Procedia PDF Downloads 392
24425 Critically Analyzing the Application of Big Data for Smart Transportation: A Case Study of Mumbai

Authors: Tanuj Joshi

Abstract:

Smart transportation is fast emerging as a solution to modern cities' mobility issues, delayed emergency response rates, and high congestion on streets. The present-day scenario with Google Maps, Waze, Yelp, etc. demonstrates how information and communications technologies control the intelligent transportation system. This intangible and invisible infrastructure is largely guided by big data analytics. On the other side, the exponential increase in the Indian urban population has intensified the demand for better services and infrastructure to satisfy the transportation needs of its citizens. No doubt, India's huge internet usage is looked to as an important resource to help achieve this. However, with a projected number of over 40 billion objects connected to the Internet by 2025, the need for systems to handle massive volumes of data (big data) also arises. This research paper attempts to identify the ways of exploiting big data variables which will aid commuters on Indian tracks. This study explores real-life inputs by conducting surveys and interviews to identify which gaps need to be targeted to better satisfy the customers. Several experts at the Mumbai Metropolitan Region Development Authority (MMRDA), Mumbai Metro, and Brihanmumbai Electric Supply and Transport (BEST) were interviewed regarding the Information Technology (IT) systems currently in use. The interviews give relevant insights into the workings of and requirements for public transportation systems, whereas the survey investigates the macro situation.

Keywords: smart transportation, mobility issue, Mumbai transportation, big data, data analysis

Procedia PDF Downloads 165
24424 The Philosophical Hermeneutics Contribution to Form a Highly Qualified Judiciary in Brazil

Authors: Thiago R. Pereira

Abstract:

Philosophical hermeneutics is able to change the Brazilian judiciary because of its understanding of the characteristics of the human being. It is impossible for a human being, invested with the function of judge, to make absolutely neutral decisions, but philosophical hermeneutics can assist the judge in making impartial decisions based on the federal constitution. Normative legal positivism imagined a neutral judge, a judge able to try without any preconceived ideas, without allowing his or her background to influence the decision. When a judge arbitrates based on legal rules, the problem is smaller, but when there are no clear legal rules and the judge must try based on principles, there is a risk that the decision rests on what the judge personally believes in; solipsistically, this issue gains a huge dimension. Today, the Brazilian judiciary is independent, but there must be a greater knowledge of philosophy and the philosophy of law, partly because the bigger problem is the unpredictability of decisions made by the judiciary. Actually, when a lawsuit is filed, the result of the judgment is absolutely unpredictable; it is almost a gamble. There must be at least a minimum of legal certainty and predictability of judicial decisions, so that people with similar cases do not receive opposite sentences. Relativism, since classical antiquity, believes in the possibility of multiple answers. From the Greeks in the sixth century before Christ, through the Germans in the eighteenth century, and even today, the constitution has been established as the great law, the Grundnorm, and thus the relativism of life can be greatly reduced when a hermeneut uses the constitution as the interpretational north, where all interpretation must pass through the hermeneutic constitutional filter. A current philosophy of law holds that, inside a legal system with a federal constitution, there is a single correct answer to a specific case. The challenge is how to find this right answer. The only answer to this question is that we should use the constitutional principles. But in many cases, a collision between principles will take place, and to resolve this issue, the judge or the hermeneut will choose a solipsistic way, relying on what they personally believe to be the right one. For obvious reasons, that conduct is not safe. Thus, a theory of decision is necessary to seek justice, and hermeneutic philosophy and the linguistic turn will be necessary for one to find the right answer. To help with this difficult mission, it is necessary to use philosophical hermeneutics in order to find the right answer, which is the constitutionally most appropriate response. The constitutionally appropriate response will not always be the answer that individuals agree with, but we must put aside our preferences and defend the answer that the constitution gives us. Therefore, hermeneutics applied to law, in search of the constitutionally appropriate response, should be the safest way to avoid individual judicial decisions. The aim of this paper is to present the science of law starting from the linguistic turn and philosophical hermeneutics, moving away from legal positivism. The methodology used in this paper is qualitative, academic, and theoretical, with philosophical hermeneutics guiding research that proposes a new way of thinking about the science of law. The research sought to demonstrate the difficulty the Brazilian courts have in departing from the secular influence of legal positivism. Moreover, the research sought to demonstrate the need to think about the science of law within a contemporary perspective, where the linguistic turn and philosophical hermeneutics will be the surest way to conduct the science of law in the present century.

Keywords: hermeneutic, right answer, solipsism, Brazilian judiciary

Procedia PDF Downloads 333