Search results for: linked data
24809 Road Safety in Great Britain: An Exploratory Data Analysis
Authors: Jatin Kumar Choudhary, Naren Rayala, Abbas Eslami Kiasari, Fahimeh Jafari
Abstract:
Great Britain has one of the safest road networks in the world. However, the consequences of any death or serious injury are devastating for loved ones, as well as for those who help the severely injured. This paper aims to analyse Great Britain's road safety situation and show the response measures for areas where the total damage caused by accidents can be significantly and quickly reduced. In this paper, we perform an exploratory data analysis using STATS19 data. For the past 30 years, the UK has had a good record in reducing fatalities, ranking third based on the number of road deaths per million inhabitants. Around 165,000 accidents were reported in Great Britain in 2009, and the figure decreased every year until 2019, when it was under 120,000. The government continues to scale back road deaths, empowering responsible road users by identifying and prosecuting the behaviours that make the roads less safe.
Keywords: road safety, data analysis, openstreetmap, feature expanding
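As a worked illustration of the exploratory analysis described above, the sketch below tabulates yearly accident counts from a STATS19-style table. The file name, column names and severity coding are assumptions for illustration; the published STATS19 schema should be checked before use.

```python
import pandas as pd

# Minimal EDA sketch for STATS19-style accident records.
# File name and columns ("Date", "Accident_Severity") are assumed.
df = pd.read_csv("stats19_accidents.csv", parse_dates=["Date"])

# Yearly accident counts, to reproduce the 2009-2019 downward trend.
per_year = df.groupby(df["Date"].dt.year).size()
print(per_year)

# Share of fatal accidents per year (severity coding assumed: 1 = fatal).
fatal_share = (
    df.assign(year=df["Date"].dt.year, fatal=df["Accident_Severity"].eq(1))
      .groupby("year")["fatal"].mean()
)
print(fatal_share.round(4))
```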
Procedia PDF Downloads 143
24808 Intrusion Detection System Using Linear Discriminant Analysis
Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou
Abstract:
Most of the existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and inaccurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudo-inverse to avoid the singularity of the within-class scatter matrix caused by the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the (PCA+LDA) method clearly outperforms the pseudo-inverse LDA method when large training data are available.
Keywords: LDA, pseudo-inverse, PCA, IDS, NSL-KDD, KDDcup99
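A minimal sketch of the first proposed solution (PCA for dimensionality reduction, then LDA, then KNN classification), using scikit-learn. Synthetic data stands in for KDDcup99/NSL-KDD, which are not bundled here, and the component counts are illustrative.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Stand-in for NSL-KDD-style traffic features (real data not bundled here).
X, y = make_classification(n_samples=2000, n_features=40, n_classes=2,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First proposed solution: PCA to escape the SSS problem, then LDA, then KNN.
model = make_pipeline(PCA(n_components=20),
                      LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print("PCA+LDA+KNN accuracy:", model.score(X_te, y_te))
```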
Procedia PDF Downloads 233
24807 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —
Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno
Abstract:
STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world datasets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
Keywords: rule induction, decision table, missing data, noise
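The core of STRIM is testing whether a candidate rule is statistically significant. The toy check below uses a plain binomial test of a rule's confidence against the marginal class rate as a stand-in; STRIM's actual test statistic follows the original papers, and the counts here are invented.

```python
from scipy.stats import binomtest

# Toy significance check in the spirit of STRIM: does the rule
# "IF attribute A = a THEN class = c" pick out class c significantly
# more often than the class's marginal rate? (STRIM's exact statistic
# follows the original papers; a plain binomial test stands in here.)
n_matching = 120        # rows of the decision table matching the condition
k_class_c = 84          # of those, rows whose decision attribute is c
p_marginal = 0.5        # marginal frequency of class c in the whole table

result = binomtest(k_class_c, n_matching, p_marginal, alternative="greater")
print(f"rule confidence = {k_class_c / n_matching:.2f}, "
      f"p-value = {result.pvalue:.3g}")  # small p-value -> adopt the rule
```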
Procedia PDF Downloads 397
24806 Commodification of the Chinese Language: Investigating Language Ideology in the Chinese Complementary Schools’ Online Discourse
Authors: Yuying Liu
Abstract:
Despite the increasing popularity of Chinese and the recognition of a growing commodifying ideology of the Chinese language in many contexts (Liu and Gao, 2020; Guo, Shin and Shen, 2020), the ideological orientations of the Chinese diaspora community towards the Chinese language remain under-researched. This research seeks to bridge this gap by investigating the micro-level language ideologies embedded in the Chinese complementary schools in the Republic of Ireland. Informed by Ruíz’s (1984) metaphorical representations of language, the websites of 11 Chinese complementary schools were analysed as discursive texts that signal language policy and ideology to prospective learners and parents. The results of the analysis suggest that a move is evident from a portrayal of Chinese as linked to student heritage identity towards the commodification of linguistic and cultural diversity. This denotes a growing commodifying ideology among the Chinese complementary schools in the Republic of Ireland. The changing profile of the complementary school, from serving an ethnic community to teaching Chinese as a foreign language for the wider community, indicates the possibility of creating a positive synergy between the complementary schools and mainstream education. This study contributes to the wider discussions of language ideology and language planning with regard to modern language learning and heritage language maintenance.
Keywords: the Chinese language, Chinese as heritage language, Chinese as foreign language, Chinese community schools
Procedia PDF Downloads 142
24805 Gender Difference in the Association between Different Components of the Metabolic Syndrome and Vitamin D Levels in Saudi Patients
Authors: Amal Baalash, Shazia Mukaddam, M. Adel El-Sayed
Abstract:
Background: Several studies have suggested non-skeletal effects of vitamin D and linked its deficiency with features of many chronic conditions. In this study, we aimed to investigate the relationship between vitamin D levels and different components of the metabolic syndrome in male and female Saudi patients. Methods: The study population consisted of 111 patients with metabolic syndrome (71 females and 40 males) aged 37-63 years, enrolled from patients attending the internal medicine outpatient clinics of King Fahad Medical City. The parameters for diagnosis of the metabolic syndrome according to the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) were measured, including waist circumference, TG, HDL-C, blood pressure and fasting blood glucose (FBS). The association between each parameter and serum 25-hydroxyvitamin D (25(OH)D) was studied in male and female patients separately. Results: In male patients, 25(OH)D levels were inversely associated with FBS and TG and positively associated with HDL-C and diastolic blood pressure, with the strongest association being with HDL-C levels. On the other hand, 25(OH)D showed no significant association with any of the measured metabolic syndrome parameters in female patients. Conclusion: In Saudi patients with metabolic syndrome, the association between the parameters of metabolic syndrome and the levels of 25(OH)D is more pronounced in males than in females.
Keywords: gender, metabolic syndrome, Saudi patients, vitamin D
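A sketch of the per-gender association analysis: correlate each metabolic syndrome parameter with 25(OH)D separately for males and females. The file name and column names are assumptions, not the study's actual dataset.

```python
import pandas as pd

# Per-gender correlation sketch; CSV name and columns are assumed.
df = pd.read_csv("mets_patients.csv")  # columns: sex, vitd_25oh, fbs, tg, hdl_c, dbp

params = ["fbs", "tg", "hdl_c", "dbp"]
for sex, group in df.groupby("sex"):
    corr = group[params].corrwith(group["vitd_25oh"])
    print(f"{sex}: correlations with 25(OH)D")
    print(corr.round(3))
```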
Procedia PDF Downloads 380
24804 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services
Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme
Abstract:
Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing
Procedia PDF Downloads 118
24803 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform
Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu
Abstract:
Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing involves SQL queries with aggregate, join, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A proper metric was devised to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on executing three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
Keywords: Hadoop platform planning, optimal cluster scheme at fixed fund, performance predicting formula, typical SQL query tasks
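A sketch of the regression idea: fit task time against cluster factors, then rank the candidate configurations affordable under the fixed fund. All numbers are invented, and the paper's empirical formula may use a different functional form.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit query time against cluster factors, then rank candidate purchases.
# Columns: CPU benchmark score, memory (GB), physical hosts in cluster.
X = np.array([[800, 32, 4], [800, 64, 4], [1200, 32, 6],
              [1200, 64, 6], [1600, 64, 8], [1600, 128, 8]])
t = np.array([410.0, 360.0, 290.0, 250.0, 190.0, 170.0])  # seconds per task

model = LinearRegression().fit(X, t)

# Candidate purchases under a fixed fund: predict and pick the fastest.
candidates = np.array([[1200, 128, 6], [1600, 64, 7], [800, 64, 10]])
pred = model.predict(candidates)
best = candidates[np.argmin(pred)]
print("predicted times:", pred.round(1), "-> suggested cluster:", best)
```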
Procedia PDF Downloads 232
24802 Model Predictive Controller for Pasteurization Process
Authors: Tesfaye Alamirew Dessie
Abstract:
Our study focuses on developing a Model Predictive Controller (MPC) and evaluating it against a traditional PID controller for a pasteurization process. The dynamics of the pasteurization process were identified from experimental data using system identification. The quality of several model architectures was evaluated using best fit with data validation, residual analysis, and stability analysis. The auto-regressive with exogenous input (ARX322) model fit the validation data of the pasteurization process by roughly 80.37 percent. The ARX322 model structure was used to create the MPC and PID control techniques. After comparing controller performance based on settling time, overshoot percentage, and stability analysis, it was found that the MPC controller outperforms PID on those parameters.
Keywords: MPC, PID, ARX, pasteurization
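A minimal ARX identification sketch in the spirit of this approach: estimate the model y[k] = a1*y[k-1] + ... + b1*u[k-nk] + ... by least squares. The orders below loosely mirror "ARX322" as an assumption, and the signals are simulated rather than the pasteurizer's experimental data.

```python
import numpy as np

# Minimal ARX identification sketch with synthetic input/output signals.
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 300)                       # input, e.g. heater command
y = np.zeros(300)                                # output, e.g. temperature
for k in range(3, 300):                          # "true" system for the demo
    y[k] = 0.6*y[k-1] + 0.2*y[k-2] + 0.4*u[k-2] + 0.01*rng.standard_normal()

na, nb, nk = 3, 2, 2                             # ARX orders and input delay
rows = range(max(na, nb + nk - 1), 300)
Phi = np.array([[*(y[k-i] for i in range(1, na+1)),
                 *(u[k-nk-j] for j in range(nb))] for k in rows])
theta, *_ = np.linalg.lstsq(Phi, y[list(rows)], rcond=None)
print("estimated ARX parameters:", theta.round(3))
```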
Procedia PDF Downloads 167
24801 Precancerous Lesions Related to Human Papillomavirus: Importance of Cervicography as a Complementary Diagnostic Method
Authors: Denise De Fátima Fernandes Barbosa, Tyane Mayara Ferreira Oliveira, Diego Jorge Maia Lima, Paula Renata Amorim Lessa, Ana Karina Bezerra Pinheiro, Cintia Gondim Pereira Calou, Glauberto Da Silva Quirino, Hellen Lívia Oliveira Catunda, Tatiana Gomes Guedes, Nicolau Da Costa
Abstract:
The aim of this study is to evaluate the use of Digital Cervicography (DC) in the diagnosis of precancerous lesions related to Human Papillomavirus (HPV). This is a cross-sectional, evaluative study with a quantitative approach, held in a health unit linked to the Pro Dean of Extension of the Federal University of Ceará from July to August 2015, with a sample of 33 women. Data were collected through interviews using a structured instrument. The technique used for DC was standardized by Franco (2005). Polymerase Chain Reaction (PCR) was performed to identify high-risk HPV genotypes. The DC images were evaluated and classified by 3 judges. The results of DC and PCR were classified as positive, negative or inconclusive. The data from the collection instruments were compiled and analyzed with the Statistical Package for the Social Sciences (SPSS) using descriptive statistics and cross-references. Sociodemographic, sexual and reproductive variables were analyzed through absolute frequencies (N) and their respective percentages (%). The kappa coefficient (κ) was applied to determine the agreement between the evaluators' DC reports and PCR, and also among the judges regarding the DC results. Pearson's chi-square test was used for the analysis of sociodemographic, sexual and reproductive variables against the PCR reports. Values of p<0.05 were considered statistically significant. Ethical aspects of research involving human beings were respected, according to Resolution 466/2012. Regarding the sociodemographic profile, the most prevalent age groups, in equal proportion, were 21-30 and 41-50 years old (24.2% each). Brown colour was reported most often (84.8%), and 96.9% of the women had completed, or were attending, primary or secondary school. 51.5% were married, 72.7% Catholic, 54.5% employed and 48.5% had an income between one and two minimum wages. As for sexual and reproductive characteristics, heterosexual women predominated (93.9%), and most did not use condoms during sexual intercourse (72.7%). 51.5% had a previous history of a Sexually Transmitted Infection (STI), with HPV the most prevalent STI (76.5%). 57.6% did not use contraception, 78.8% underwent the cervical cancer prevention examination (PCCU) at intervals of one year or less, 72.7% had no family history of cervical cancer, 63.6% were multiparous and 97% were not vaccinated against HPV. DC showed a good level of agreement between raters (κ=0.542) and had a specificity of 77.8% and a sensitivity of 25% when its results were compared with PCR. Only the variable race showed a statistically significant association with PCR (p=0.042). DC had 100% acceptance amongst the women in the sample, suggesting further studies of this method to confirm it as a viable technique. The DC positivity criteria were developed by nurses, and these professionals also perform the PCCU in Brazil, which means that DC can be an important complementary diagnostic method for improving the quality of these professionals' examinations.
Keywords: gynecological examination, human papillomavirus, nursing, papillomavirus infections, uterine neoplasms
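The agreement analysis above rests on the kappa coefficient; the sketch below computes Cohen's kappa between DC and PCR readings on invented labels, purely to illustrate the statistic.

```python
from sklearn.metrics import cohen_kappa_score

# Toy agreement check between cervicography readings and PCR, mirroring the
# study's kappa analysis. The label sequences are invented for illustration.
dc_reports  = ["pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]
pcr_reports = ["pos", "neg", "pos", "pos", "neg", "neg", "neg", "neg"]

kappa = cohen_kappa_score(dc_reports, pcr_reports)
print(f"kappa = {kappa:.3f}")  # ~0.4-0.6 indicates moderate agreement
```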
Procedia PDF Downloads 304
24800 Point Estimation for the Type II Generalized Logistic Distribution Based on Progressively Censored Data
Authors: Rana Rimawi, Ayman Baklizi
Abstract:
Skewed distributions are important models that are frequently used in applications. Generalized distributions form a class of skewed distributions and have gained widespread use in applications because of their flexibility in data analysis. More specifically, the Generalized Logistic Distribution, with its different types, has received considerable attention recently. In this study, based on progressively type-II censored data, we consider point estimation for the Type II Generalized Logistic Distribution (Type II GLD). We develop several estimators for its unknown parameters, including maximum likelihood estimators (MLE), Bayes estimators and best linear unbiased estimators (BLUE). The estimators are compared by simulation based on the criteria of bias and mean square error (MSE). An illustrative example with a real data set is given.
Keywords: point estimation, type II generalized logistic distribution, progressive censoring, maximum likelihood estimation
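A sketch of the MLE under progressive type-II censoring, where the log-likelihood adds a survival term for the R_i units removed at each observed failure: l(a) = sum log f(x_i; a) + sum R_i log S(x_i; a). The standardized Type II GLD density and survival used below, f(x) = a e^x (1+e^x)^(-a-1) and S(x) = (1+e^x)^(-a), are an assumed parametrization that should be checked against the paper, and the data are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# MLE sketch for the shape parameter under progressive type-II censoring.
# x: observed (failure) order statistics; R: units removed at each failure.
x = np.array([-1.8, -1.1, -0.6, -0.2, 0.3, 0.9])
R = np.array([2, 0, 1, 0, 2, 3])

def neg_loglik(a):
    log_f = np.log(a) + x - (a + 1) * np.log1p(np.exp(x))  # assumed density
    log_S = -a * np.log1p(np.exp(x))                        # assumed survival
    return -(log_f + R * log_S).sum()

res = minimize_scalar(neg_loglik, bounds=(1e-3, 50), method="bounded")
print("MLE of shape parameter:", round(res.x, 4))
```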
Procedia PDF Downloads 202
24799 Minors and Terrorism: A Discussion about the Recruitment and Resilience
Authors: Marta Maria Aguilar Carceles
Abstract:
This theoretical study discusses how terrorism is rising around the world and which factors and situations contribute to this process. Linked to aspects of human development, minors are one of the collectives most vulnerable to being recruited for this purpose. Their particular weakness and limited capacity for self-defense make them more likely to become victims as a result of a brainwashing process. Terrorism is an illicit way to achieve political and social changes, and new technologies and available resources make it easier to spread. In this sense, through a review of different recent scientific articles, this study evaluates which risk factors can provoke affiliation and the later development of antisocial and illicit behavior. Examples of this group of factors are the inter-generational continuity between parents and their children, as well as sociodemographic aspects joined to cultural experiences (i.e. sense of dishonor, frustration, etc.). The assessment of these variables must be accompanied by the evaluation of protective factors, because the reasons why a person decides to join terrorism are inherently idiosyncratic, and mechanisms of prevention can only be installed when those personal characteristics are known. In sum, both aspects underline the relevance of internalizing and externalizing personal factors, each of them in one specific direction: a) increasing the possibility of being recruited or following this type of criminal group, and b) being able to avoid the effects and consequences of terrorism thanks to personal, resilient characteristics (resilience).
Keywords: criminality, minors, recruitment, resilience, terrorism
Procedia PDF Downloads 137
24798 Omni: Data Science Platform for Evaluating the Performance of a LoRaWAN Network
Authors: Emanuele A. Solagna, Ricardo S. Tozetto, Roberto dos S. Rabello
Abstract:
Nowadays, physical processes are becoming digitized by the evolution of communication, sensing and storage technologies, which promotes the development of smart cities. The evolution of this technology has generated multiple challenges related to the generation of big data and the active participation of electronic devices in society. Devices can send information that is captured and processed over large areas, but there is no guarantee that the entire amount of data obtained will be effectively stored and correctly persisted, because, depending on the technology used, some parameters have a huge influence on the full delivery of information. This article characterizes the project, currently under development, of a platform that, based on data science, will evaluate the performance and effectiveness of an industrial network implementing LoRaWAN technology, considering the configuration of its main parameters and relating these parameters to information loss.
Keywords: Internet of Things, LoRa, LoRaWAN, smart cities
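One concrete metric such a platform can report is the packet delivery ratio: compare packets sent by devices with packets actually persisted, broken down by a radio parameter. The log file and its columns below are assumptions for illustration.

```python
import pandas as pd

# Packet-delivery-ratio sketch for LoRaWAN uplinks.
# The log file and its columns (sf, delivered) are assumed.
log = pd.read_csv("lorawan_uplinks.csv")   # one row per transmitted packet

# Packet delivery ratio per spreading factor (SF7..SF12).
pdr = log.groupby("sf")["delivered"].mean()
print(pdr.round(3))  # PDR well below 1.0 flags settings that lose information
```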
Procedia PDF Downloads 154
24797 Application of Intelligent City and Hierarchy Intelligent Buildings in Kuala Lumpur
Authors: Jalalludin Abdul Malek, Zurinah Tahir
Abstract:
When the Multimedia Super Corridor (MSC) was launched in 1995, it became the catalyst for the implementation of the intelligent city concept in an area that covers about 15 x 50 kilometres from Kuala Lumpur City Centre (KLCC), Putrajaya and Kuala Lumpur International Airport (KLIA). The intelligent city concept means that the city has advanced infrastructure and infostructure, such as information technology, advanced telecommunication systems, electronic technology and mechanical technology, to be utilized for the development of urban elements such as industries, health, services, transportation and communications. For example, the Golden Triangle of Kuala Lumpur also has many intelligent buildings developed by the private sector, such as the KLCC Tower, to implement the intelligent city concept. Consequently, the intelligent buildings in the Golden Triangle can be linked directly to the Putrajaya Intelligent City and Cyberjaya Intelligent City within the confines of the MSC. However, the reality of the situation is that not many intelligent buildings within the Golden Triangle of Kuala Lumpur can be considered high-standard intelligent buildings as referred to by the Intelligence Quotient (IQ) building standard. This increases the need to implement the real ‘intelligent city’ concept. This paper aims to show the strengths and weaknesses of the intelligent buildings in the Golden Triangle by taking into account aspects of ‘intelligence’ in the technology and infrastructure of the buildings.
Keywords: intelligent city concepts, intelligent building, Golden Triangle, Kuala Lumpur
Procedia PDF Downloads 300
24796 Modeling Spillover Effects of Pakistan-India Bilateral Trade upon Sustainability of Economic Growth in Pakistan
Authors: Taimoor Hussain Alvi, Syed Toqueer Akhter
Abstract:
The focus of this research is to identify the spillover effects of Pak-India bilateral trade upon Pakistan’s growth rate. Cross-country spillover growth effects have been linked with openness and access to markets. In this research, we intend to see the short-run and long-run effects of Pak-India bilateral trade openness upon economic growth in Pakistan. Trade openness has been measured as the sum of bilateral exports and imports between the two countries. Increased emphasis is laid on the condition and environment of financial markets in light of globalization and trade liberalization. This research paper makes use of a univariate autoregressive distributed lag (ARDL) model to analyze the effects of bilateral trade variables upon the growth pattern of Pakistan in the short run and long run. Key findings of the study empirically support the notion that increased bilateral trade will be beneficial for Pakistan in the short run because of cost advantages and knowledge spillover in terms of increased technical and managerial ability from multinational firms. However, contrary to extensive literature, increased bilateral trade measures will affect Pakistan’s growth rate negatively in the long run because of the industrial size differential and the increased integration of the Indian economy with the world.
Keywords: bilateral trade openness, spillover, comparative advantage, univariate
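A sketch of an ARDL(1,1)-style regression of growth on its own lag and on current and lagged trade openness, including the long-run effect implied by the coefficients. The series are simulated; the study's actual data (GDP growth and Pakistan-India exports plus imports) are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# ARDL(1,1)-style regression sketch on simulated series.
rng = np.random.default_rng(1)
n = 60
openness = pd.Series(rng.normal(0, 1, n)).cumsum() * 0.1
growth = 0.3 * openness - 0.1 * openness.shift(1) + rng.normal(0, 0.2, n)

df = pd.DataFrame({"growth": growth,
                   "growth_l1": growth.shift(1),
                   "open": openness,
                   "open_l1": openness.shift(1)}).dropna()
model = sm.OLS(df["growth"],
               sm.add_constant(df[["growth_l1", "open", "open_l1"]])).fit()

# Long-run effect of openness implied by the ARDL coefficients.
b = model.params
long_run = (b["open"] + b["open_l1"]) / (1 - b["growth_l1"])
print(b.round(3), "\nlong-run openness effect:", round(long_run, 3))
```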
Procedia PDF Downloads 486
24795 Cybervetting and Online Privacy in Job Recruitment – Perspectives on the Current and Future Legislative Framework Within the EU
Authors: Nicole Christiansen, Hanne Marie Motzfeldt
Abstract:
In recent years, more and more HR professionals have been using cyber-vetting in job recruitment in an effort to find the perfect match for the company. These practices are growing rapidly, accessing a vast amount of data from social networks, some of which is privileged and protected information. Thus, there is a risk that the right to privacy is becoming a duty to manage one's private data. This paper investigates to what degree a job applicant's fundamental rights are protected adequately in current and future legislation in the EU. This paper argues that current data protection regulations and forthcoming regulations on the use of AI ensure sufficient protection on paper. However, even though the regulation protects employees within the EU, the recruitment sector may not pay sufficient attention to the regulation, as it does not specifically target this area. Therefore, the lack of specific labor and employment regulation is a concern that the social partners should attend to.
Keywords: AI, cyber vetting, data protection, job recruitment, online privacy
Procedia PDF Downloads 91
24794 Sequential Pattern Mining from Data of Medical Record with Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)
Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim
Abstract:
This research was conducted at the Bolo Primary Health Care in Bima Regency. The purpose of the research is to find the association patterns formed in the medical record database of Bolo Primary Health Care's patients. The data used are secondary data from the PHC's medical records database. Sequential pattern mining is the method used for the analysis. Transaction data were generated from Patient_ID, Check_Date and diagnosis. Sequential Pattern Discovery using Equivalent Classes (SPADE) is one of the algorithms in sequential pattern mining; this algorithm finds frequent sequences in transaction data using a vertical database and a sequence join process. The result of the SPADE algorithm is a set of frequent sequences that are then used to form rules. This technique is used to find the association patterns between combinations of items. Based on the sequential association rule analysis with the SPADE algorithm, for a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained from the sequence of Patient_ID, Check_Date and diagnosis data in the Bolo PHC.
Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm
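A toy support count in the spirit of sequential pattern mining: each patient contributes a visit-ordered sequence of diagnoses, and a pattern's support is the fraction of sequences containing it as a subsequence. The records and diagnoses are invented; a real SPADE implementation additionally uses vertical id-lists and equivalence-class joins.

```python
from itertools import groupby

# Build one visit-ordered diagnosis sequence per patient (invented data).
visits = [  # (patient_id, check_date, diagnosis)
    (1, "2016-01-05", "ISPA"), (1, "2016-02-10", "gastritis"),
    (2, "2016-01-07", "ISPA"), (2, "2016-03-01", "gastritis"),
    (3, "2016-01-09", "ISPA"), (3, "2016-02-20", "hypertension"),
]
sequences = {pid: [d for _, _, d in sorted(v)]
             for pid, v in ((p, list(g)) for p, g in
                            groupby(sorted(visits), key=lambda r: r[0]))}

def support(pattern, seqs):
    """Fraction of sequences containing the pattern as a subsequence."""
    def contains(seq):
        it = iter(seq)
        return all(item in it for item in pattern)  # in-order matching
    return sum(contains(s) for s in seqs.values()) / len(seqs)

print(support(("ISPA", "gastritis"), sequences))  # 2 of 3 patients -> 0.67
```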
Procedia PDF Downloads 404
24793 Estimation of Reservoirs Fracture Network Properties Using an Artificial Intelligence Technique
Authors: Reda Abdel Azim, Tariq Shehab
Abstract:
The main objective of this study is to develop a subsurface fracture map of naturally fractured reservoirs by overcoming the limitations associated with different data sources in characterising fracture properties. Some of these limitations are overcome by employing a nested neuro-stochastic technique to establish the inter-relationships between different data, such as conventional well logs, borehole images (FMI), core descriptions and seismic attributes, and then characterising fracture properties in terms of fracture density and fractal dimension for each data source. Fracture density is an important property of a fracture network system, as it is a measure of the cumulative area of all the fractures in a unit volume of the system, and fractal dimension is used to characterize self-similar objects such as fractures. At the wellbore locations, fracture density and fractal dimension can only be estimated for limited sections where FMI data are available. Therefore, an artificial intelligence technique is applied to approximate these quantities at locations along the wellbore where the hard data are not available. It should be noted that artificial intelligence techniques have proven their effectiveness in this domain of applications.
Keywords: naturally fractured reservoirs, artificial intelligence, fracture intensity, fractal dimension
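A sketch of the interpolation step: train a small neural network on wellbore sections where FMI-derived fracture density exists, then predict it from conventional well logs elsewhere. The arrays are synthetic stand-ins, and the paper's nested neuro-stochastic technique is more elaborate than this plain regressor.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins for well-log features and FMI-derived fracture density.
rng = np.random.default_rng(2)
logs_fmi = rng.normal(size=(80, 4))  # e.g. gamma ray, sonic, density, resistivity
frac_density = logs_fmi @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(0, 0.1, 80)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=3000, random_state=0))
model.fit(logs_fmi, frac_density)

logs_no_fmi = rng.normal(size=(5, 4))  # sections without borehole images
print(model.predict(logs_no_fmi).round(3))
```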
Procedia PDF Downloads 257
24792 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia
Authors: Tim Nedyalkov
Abstract:
A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services. Collecting, managing, and retaining large amounts of data in cloud environments makes information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment (Notifiable Data Breaches) (NDB) Act 2017 in Australia, as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success and influencing the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers seeking to meet the rapidly changing regulatory and compliance requirements.
Keywords: cloud compliance, cloud security, data governance, privacy protection
Procedia PDF Downloads 122
24791 Grains of Winter Wheat Spelt (Triticum spelta L.) for Safe Food Production
Authors: D. Jablonskytė-Raščė, A. Mankevičienė, S. Supronienė, I. Kerienė, S. Maikštėnienė, S. Bliznikas, R. Česnulevičienė
Abstract:
Organic farming does not allow the use of conventional mineral fertilizers and crop protection products. As a result, in our experiments we chose to grow different species of cereals to see how the cereal species affects mycotoxin accumulation. From the phytopathological and entomological viewpoint, the glumes of spelt grain perform a positive role, since they protect the grain from infection by pathogenic microorganisms. Against the background of the above-mentioned infection, there were more Fusarium-affected grains of spelt than of common wheat, so it can be assumed that spelt is more susceptible to Fusarium fungi infection than common wheat. This study describes the occurrence of DON, ZEA and T2/HT2 toxins in a survey of spelt and common wheat and their bran and flour. The analysis was conducted using the enzyme-linked immunosorbent assay (ELISA) method. The concentrations of DON, ZEA, and T2/HT2 in Triticum spelta and Triticum aestivum are influenced by the interaction of species, cereal type and year. The highest concentration of mycotoxin was found in spelt grain with glumes. The obtained results indicate significantly higher concentrations of Fusarium toxins in glumes than in dehulled grain, which implicates the possible protective effect of spelt wheat glumes. The lowest DON, ZEA, and T2/HT2 concentrations were determined in spelt grain without glumes.
Keywords: Fusarium mycotoxins, organic farming, spelt
Procedia PDF Downloads 316
24790 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa
Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees
Abstract:
The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters, generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis. The global solar radiation (GSR) distribution was estimated over six selected locations in North Africa during the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the datasets, revealing a significant variation of errors in different seasons; the performance of the dataset changes with the temporal resolution of the data used for comparison. The monthly mean values of the data show better performance, but the accuracy of the data is compromised. The solar radiation data of ERA-5 are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R2) varies from 93% to 99% for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to help in solar energy applications in all fields, and this can be done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model to give good results.
Keywords: solar energy, solar radiation, ERA-5, potential energy
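The validation metrics quoted above can be computed directly; the sketch below evaluates RMSE, MBE, MAE and the correlation coefficient between reanalysis and ground-measured GSR on placeholder values, not actual ERA-5 output.

```python
import numpy as np

# Validation-metric sketch: reanalysis GSR vs ground-measured GSR.
measured = np.array([5.1, 6.3, 7.0, 6.8, 5.9, 4.7])   # kWh/m^2/day (invented)
era5     = np.array([5.3, 6.1, 7.4, 6.6, 6.2, 4.5])

diff = era5 - measured
rmse = np.sqrt(np.mean(diff**2))
mbe  = np.mean(diff)                 # sign shows over/under-estimation
mae  = np.mean(np.abs(diff))
r    = np.corrcoef(era5, measured)[0, 1]
print(f"RMSE={rmse:.3f}  MBE={mbe:.3f}  MAE={mae:.3f}  R^2={r**2:.3f}")
```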
Procedia PDF Downloads 216
24789 Serological Assay and Genotyping of Hepatitis C Virus in Infected Patients in Zanjan Province
Authors: Abdolreza Esmaeilzadeh, Maryam Erfanmanesh, Sousan Ghasemi, Farzaneh Mohammadi
Abstract:
Background: Hepatitis C Virus (HCV), a public health problem, is an enveloped, single-stranded RNA virus and a member of the Hepacivirus genus of the Flaviviridae family. Liver cancer, cirrhosis, and end-stage liver disease are the outcomes of chronic infection with HCV. HCV isolates show significant genetic heterogeneity around the world. Therefore, determining HCV genotypes is a vital step in determining prognosis and planning therapeutic strategies. Materials and Methods: Serum samples of 136 patients were collected and analyzed for anti-HCV antibodies using the enzyme-linked immunosorbent assay (ELISA) method. Positive samples were then subjected to RT-PCR, which was performed under standard conditions. Afterwards, they were investigated for genotype using allele-specific PCR (AS-PCR) and the HCV genotype 2.0 line probe assay (LiPA). Results: Samples showed 216 bp bands on 2% agarose gel. Analysis of the results demonstrated that the most dominant subtype in Zanjan Province was 3a, with a frequency of 38.26%, followed by subtypes 1b, 1a, 2, and 4, with frequencies of 25.73%, 22.05%, 5.14%, and 4.41%, respectively. The frequency of unknown HCV genotypes was 4.41%. Conclusions: According to the results, the most prevalent HCV genotype in Zanjan is subtype 3a. Analysis of the results enables the identification of certain HCV genotypes, and these valuable findings could affect the type and duration of the treatment.
Keywords: anti-HCV antibody, Hepatitis C Virus (HCV), genotype, RT-PCR, AS-PCR
Procedia PDF Downloads 492
24788 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data
Authors: Fan Gao, Lior Pachter
Abstract:
The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We use the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type-specific features, and we show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.
Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome
Procedia PDF Downloads 158
24787 Fabrication of Nanostructured Arrays Using Si-Containing Block Copolymer and Dually Responsive Photoresist
Authors: Kyoungok Jung, Chang Hong Bak, Gyeong Cheon Jo, Jin-Baek Kim
Abstract:
Nanostructured arrays have drawn extensive attention because of their unique properties resulting from nanoscale features. However, it is difficult to achieve uniform and freestanding 1D nanostructures over a large area. Here, a simple and novel method was developed for the fabrication of universal nanoporous templates for high-density nanostructure arrays by combining the self-assembly of a Si-containing block copolymer with a bilayer lithography system. We introduced a dually responsive photoresist bottom layer into which the nanopatterns of the block copolymer are transferred by oxygen reactive ion etching. Because the dually responsive layer becomes cross-linked by heating, it can be used as a hard template during the etching process. It becomes soluble again by chain scission upon exposure to light; therefore, it can be easily removed by the lift-off process. The template was applicable to various conducting substrates due to the compatibility of the photoresist with a wide range of substrates, and it was used in electrodeposition for well-aligned and high-density inorganic and organic nanoarrays. We successfully obtained vertically aligned and highly ordered gold nanorods and polypyrrole dots on the substrate without aggregation, and these arrays did not collapse after removing the dually responsive templates by the simple lift-off process.
Keywords: block copolymer, dually responsive, nanostructure, photoresist
Procedia PDF Downloads 259
24786 Meta Mask Correction for Nuclei Segmentation in Histopathological Image
Authors: Jiangbo Shi, Zeyu Gao, Chen Li
Abstract:
Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep learning-based methods. However, the development of such an automated method requires a large amount of data with precisely annotated masks, which are hard to obtain. Training with weakly labeled data is a popular solution for reducing the workload of annotation. In this paper, we propose a novel meta-learning-based nuclei segmentation method that follows the label correction paradigm to leverage data with noisy masks. Specifically, we design a fully convolutional meta-model that can correct noisy masks by using a small amount of clean meta-data. The corrected masks are then used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model. Extensive experimental results on two nuclei segmentation datasets show that our method achieves the state-of-the-art result. In particular, in some noise scenarios, it even exceeds the performance of training on supervised data.
Keywords: deep learning, histopathological image, meta-learning, nuclei segmentation, weak annotations
Procedia PDF Downloads 143
24785 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios and detect dependencies between the covariates and the given target, as well as assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced datasets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real dataset of Bosch hydraulic valves used here, the comparability of the same production conditions in the production of hydraulic valves within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach for predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
Keywords: classification, machine learning, predictive quality, feature selection
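A sketch of importance-based feature selection with an AdaBoost classifier: rank features by importance, keep the top ones, and refit. Synthetic imbalanced data stands in for the Bosch cross-process features, which are not public.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for cross-process features; leakage is the rare class.
X, y = make_classification(n_samples=1500, n_features=30, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Keep only the most informative features and refit on the reduced set.
keep = np.argsort(clf.feature_importances_)[-10:]
clf_small = AdaBoostClassifier(n_estimators=200, random_state=0)
clf_small.fit(X_tr[:, keep], y_tr)
print("all features:", clf.score(X_te, y_te),
      "top-10 features:", clf_small.score(X_te[:, keep], y_te))
```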
Procedia PDF Downloads 166
24784 Safety Tolerance Zone for Driver-Vehicle-Environment Interactions under Challenging Conditions
Authors: Matjaž Šraml, Marko Renčelj, Tomaž Tollazzi, Chiara Gruden
Abstract:
Road safety is a worldwide issue with numerous and heterogeneous factors influencing it. On one side, the driver state – comprising distraction/inattention, fatigue, drowsiness, extreme emotions, and socio-cultural factors – highly affects road safety. On the other side, the vehicle state has an important role in mitigating (or not) the road risk. Finally, the road environment is still one of the main determinants of road safety, defining driving task complexity. At the same time, thanks to technological development, a lot of detailed data is easily available, creating opportunities for the detection of driver state, vehicle characteristics and road conditions and, consequently, for the design of ad hoc interventions aimed at improving driver performance, increasing awareness and mitigating road risks. This is the challenge faced by the i-DREAMS project. i-DREAMS, which stands for a smart Driver and Road Environment Assessment and Monitoring System, is a 3-year project funded by the European Union's Horizon 2020 research and innovation programme. It aims to set up a platform to define, develop, test and validate a 'Safety Tolerance Zone' to prevent drivers from getting too close to the boundaries of unsafe operation by mitigating risks in real time and after the trip. After the definition and development of the Safety Tolerance Zone concept and its concretization in an advanced driver-assistance system (ADAS) platform, the system was tested first for 2 months in a driving simulator environment in 5 different countries. After that, naturalistic driving studies started for a 10-month period (comprising a 1-month pilot study, a 3-month baseline study and a 6-month study implementing interventions). Currently, the project team has approved a common evaluation approach and is developing the assessment of the usage and outcomes of the i-DREAMS system, which is yielding positive insights. The i-DREAMS consortium consists of 13 partners: 7 engineering universities and research groups, 4 industry partners and 2 partners (the European Transport Safety Council - ETSC - and POLIS, cities and regions for transport innovation) closely linked to transport safety stakeholders, covering 8 different countries altogether.
Keywords: advanced driver assistant systems, driving simulator, safety tolerance zone, traffic safety
Procedia PDF Downloads 71
24783 Secure Data Sharing of Electronic Health Records With Blockchain
Authors: Kenneth Harper
Abstract:
The secure sharing of Electronic Health Records (EHRs) is a critical challenge in modern healthcare, demanding solutions to enhance interoperability, privacy, and data integrity. Traditional standards like Health Information Exchange (HIE) and HL7 have made significant strides in facilitating data exchange between healthcare entities. However, these approaches rely on centralized architectures that are often vulnerable to data breaches, lack sufficient privacy measures, and have scalability issues. This paper proposes a framework for secure, decentralized sharing of EHRs using blockchain technology, cryptographic tokens, and Non-Fungible Tokens (NFTs). The blockchain's immutable ledger, decentralized control, and inherent security mechanisms are leveraged to improve transparency, accountability, and auditability in healthcare data exchanges. Furthermore, we introduce the concept of tokenizing patient data through NFTs, creating unique digital identifiers for each record, which allows for granular data access controls and proof of data ownership. These NFTs can also be employed to grant access to authorized parties, establishing a secure and transparent data sharing model that empowers both healthcare providers and patients. The proposed approach addresses common privacy concerns by employing privacy-preserving techniques such as zero-knowledge proofs (ZKPs) and homomorphic encryption to ensure that sensitive patient information can be shared without exposing the actual content of the data. This ensures compliance with regulations like HIPAA and GDPR. Additionally, the integration of Fast Healthcare Interoperability Resources (FHIR) with blockchain technology allows for enhanced interoperability, enabling healthcare organizations to exchange data seamlessly and securely across various systems while maintaining data governance and regulatory compliance. Through real-world case studies and simulations, this paper demonstrates how blockchain-based EHR sharing can reduce operational costs, improve patient outcomes, and enhance the security and privacy of healthcare data. This decentralized framework holds great potential for revolutionizing healthcare information exchange, providing a transparent, scalable, and secure method for managing patient data in a highly regulated environment.
Keywords: blockchain, electronic health records (EHRs), fast healthcare interoperability resources (FHIR), health information exchange (HIE), HL7, interoperability, non-fungible tokens (NFTs), privacy-preserving techniques, tokens, secure data sharing
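A minimal hash-chain toy illustrating the integrity property that blockchain brings to EHR sharing: each record links to the previous record's hash, so tampering anywhere breaks the chain. This is a teaching sketch, not the proposed system; real deployments would keep only hashes or access pointers on-chain, never raw patient data.

```python
import hashlib
import json
import time

def add_block(chain, record):
    """Append a record whose hash covers the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"time": time.time(), "record": record, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

chain = []
add_block(chain, {"patient": "anon-17", "event": "consent granted to clinic A"})
add_block(chain, {"patient": "anon-17", "event": "lab result hash 9f3a..."})

# Verification: each block's back-link must match the previous block's hash.
ok = all(b["prev"] == (chain[i-1]["hash"] if i else "0" * 64)
         for i, b in enumerate(chain))
print("chain intact:", ok)
```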
Procedia PDF Downloads 27
24782 Formulating Rough Approximations in Information Tables with Possibilistic Information
Authors: Michinori Nakata, Hiroshi Sakai
Abstract:
A rough set, which consists of lower and upper approximations, is formulated in information tables containing possibilistic information. First, lower and upper approximations based on possible world semantics, in the same way as Lipski used in the field of incomplete databases, are shown in order to clarify the fundamentals of rough sets under possibilistic information. Possibility and necessity measures are used, as is done in possibilistic databases. As a result, each object has certain and possible membership degrees in the lower and upper approximations, and these degrees are the lower and upper bounds. Therefore, the degree to which an object belongs to the lower and upper approximations is expressed by an interval value. The complementary property linking the lower and upper approximations holds, as is valid under complete information. Second, the approach based on indiscernibility relations, which was proposed by Dubois and Prade, is extended in three cases. The first case is that objects used to approximate a set of objects are characterized by possibilistic information. The second case is that objects used to approximate a set of objects with possibilistic information are characterized by complete information. The third case is that objects characterized by possibilistic information approximate a set of objects with possibilistic information. The extended approach creates the same results as the approach based on possible world semantics, which justifies our extension.
Keywords: rough sets, possibilistic information, possible world semantics, indiscernibility relations, lower approximations, upper approximations
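For reference, the classical (complete-information) approximations that this work generalizes can be computed from indiscernibility classes as below; the decision table is invented. Under possibilistic information, the crisp memberships would become interval-valued degrees derived from possibility and necessity measures.

```python
# Classical rough approximations from indiscernibility classes (toy table).
table = {  # object -> condition attribute values
    "o1": ("high", "yes"), "o2": ("high", "yes"),
    "o3": ("low", "yes"),  "o4": ("low", "no"), "o5": ("high", "yes"),
}
target = {"o1", "o2", "o3"}  # set of objects to approximate

# Indiscernibility classes: objects with identical attribute values.
classes = {}
for obj, vals in table.items():
    classes.setdefault(vals, set()).add(obj)

lower = {o for c in classes.values() if c <= target for o in c}
upper = {o for c in classes.values() if c & target for o in c}

print("lower approximation:", sorted(lower))   # certainly in the target set
print("upper approximation:", sorted(upper))   # possibly in the target set
```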
Procedia PDF Downloads 324
24781 Removal of Heavy Metals from Aqueous Solutions by Low-Cost Materials: A Review
Authors: I. Nazari, B. Shaabani, P. Abaasifar
Abstract:
In small quantities, certain heavy metals are nutritionally essential for a healthy life. The heavy metals linked most often to human poisoning are lead, mercury, arsenic, and cadmium. Other heavy metals, including copper, zinc and chromium, are actually required by the body in small quantities but can also be toxic in large doses. Nowadays, contamination with these heavy metals is found in some untreated industrial wastewaters and even in the drinking water of several populated cities around the world. The contamination of ground and underground water sources with heavy metals can become concentrated and travel up the food chain via drinking water and agricultural products. In recent years, the need for safe and economical methods for the removal of heavy metals from contaminated water has directed research interest towards finding low-cost alternatives. Bio-adsorbents have emerged as low-cost and efficient materials for the removal of heavy metals from waste and ground waters. Bio-adsorbents have an affinity for heavy metal ions, forming metal complexes or chelates via functional groups including carboxyl, hydroxyl and imidazole groups. The objective of this study is to review research into less expensive adsorbents and the utilization possibilities of various low-cost bio-adsorbents, such as coffee beans, rice husk, and sawdust, for the removal of heavy metals from contaminated waters.
Keywords: heavy metals, water pollution, bio-adsorbents, low cost adsorbents
Procedia PDF Downloads 362
24780 The Twin Terminal of Pedestrian Trajectory Based on City Intelligent Model (CIM) 4.0
Authors: Chen Xi, Liu Xuebing, Lao Xueru, Kuan Sinman, Jiang Yike, Wang Hanwei, Yang Xiaolang, Zhou Junjie, Xie Jinpeng
Abstract:
To further promote the development of smart cities, the microscopic "nerve endings" of the City Intelligent Model (CIM) are extended to be more sensitive. In this paper, we develop a pedestrian trajectory twin terminal based on CIM and CNN technology. It uses 5G networks, architectural and geoinformatics technologies, and convolutional neural networks, combined with deep learning networks for human behavior recognition models, to provide empirical data such as pedestrian flow data and human behavioral characteristics data, and ultimately to form spatial performance evaluation criteria and a spatial performance warning system, making the empirical data accurate and intelligent for prediction and decision making.
Keywords: urban planning, urban governance, CIM, artificial intelligence, sustainable development
Procedia PDF Downloads 425