Search results for: National database
5839 Application of Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM) Database in Nursing Health Problems with Prostate Cancer: A Pilot Study
Authors: Hung Lin-Zin, Lai Mei-Yen
Abstract:
Prostate cancer is the most commonly diagnosed male cancer in the U.S., with a prevalence of around 1 in 8. The etiology of prostate cancer is still unknown, but predisposing factors such as age, black race, family history, and obesity may increase the risk of the disease. In 2020, a total of 7,178 Taiwanese people were newly diagnosed with prostate cancer, accounting for 5.88% of all cancer cases, and the incidence rate ranked fifth among men. In that year, the total number of deaths from prostate cancer was 1,730, accounting for 3.45% of all cancer deaths; the death rate ranked sixth among men and accounted for 94.34% of deaths from cancers of the male reproductive organs. A review of domestic and foreign literature on the use of OMOP (Observational Medical Outcomes Partnership) database analysis shows that, although nearly a hundred papers have been published, studies of nursing-related health problems and nursing measures built on the OMOP common data model databases of medical institutions are extremely rare. The OMOP common data model analysis platform is a system initiated by the FDA in 2007 that uses a common data model (CDM) to analyze and monitor healthcare data. Building up relevant nursing information from the OMOP-CDM database is important to assist our daily practice. Therefore, we chose prostate cancer patients, a population we frequently care for, and used the OMOP-CDM database to explore their common associated health problems. With the assistance of OMOP-CDM database analysis, we can expect earlier diagnosis and prevention of prostate cancer patients' comorbidities to improve patient care.
Keywords: OMOP, nursing diagnosis, health problem, prostate cancer
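A minimal sketch of the kind of OMOP-CDM query such a study relies on: counting co-occurring condition concepts among patients with a prostate cancer diagnosis. The table and column names follow the standard OMOP CDM (condition_occurrence, concept); the concept ID, database file, and SQLite backend are illustrative assumptions, not the authors' setup.

```python
# Sketch: most frequent co-occurring conditions among prostate cancer
# patients in an OMOP-CDM database. Concept id and db path are placeholders.
import sqlite3  # stand-in for whichever SQL backend hosts the CDM

PROSTATE_CANCER_CONCEPT_ID = 4163261  # assumed concept id for illustration

COMORBIDITY_SQL = """
SELECT c.concept_name, COUNT(DISTINCT co.person_id) AS n_patients
FROM condition_occurrence co
JOIN concept c ON c.concept_id = co.condition_concept_id
WHERE co.person_id IN (
    SELECT person_id FROM condition_occurrence
    WHERE condition_concept_id = ?
)
AND co.condition_concept_id != ?
GROUP BY c.concept_name
ORDER BY n_patients DESC
LIMIT 20;
"""

def top_comorbidities(db_path):
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            COMORBIDITY_SQL,
            (PROSTATE_CANCER_CONCEPT_ID, PROSTATE_CANCER_CONCEPT_ID),
        ).fetchall()

if __name__ == "__main__":
    for name, n in top_comorbidities("omop_cdm.sqlite"):
        print(f"{name}: {n}")
```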
Procedia PDF Downloads 73
5838 Managing Education through Effective School Community Relationships/Participation for National Security
Authors: Shehu S. Janguza
Abstract:
The need for national security cannot be overemphasized and should be pursued by any means; hence the need for effective management of education through effective school-community relationships and participation. In preparing and implementing any effort to promote community involvement in managing education, it is important to understand the whole picture of community participation: how it works, what forms are used, what benefits it can yield, and what we should expect in the process of carrying out the efforts. Finally, emphasis is placed on how effective school-community relationships and participation can lead to national security.
Keywords: community participation, managing, school community, national security
Procedia PDF Downloads 596
5837 Facial Recognition on the Basis of Facial Fragments
Authors: Tetyana Baydyk, Ernst Kussul, Sandra Bonilla Meza
Abstract:
There are many articles that attempt to establish the role of different facial fragments in face recognition, and various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to a fragment; this approach can only give an approximate estimation. In this paper, we propose a more direct measure of the importance of different fragments for face recognition: select a recognition method and a face database, and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as the face recognition method and parts of the LFW (Labeled Faces in the Wild) face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.
Keywords: face recognition, labeled faces in the wild (LFW) database, random local descriptor (RLD), random features
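A minimal sketch of the fragment experiment, assuming the scikit-learn copy of LFW and an SVM as a stand-in for the authors' PCNC classifier: the same training/testing split is scored on the whole face and on horizontal half-fragments, and the recognition rates are compared.

```python
# Compare recognition rates of face fragments vs. the whole face on LFW.
# An SVM stands in for the PCNC classifier used in the paper.
from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
h, w = lfw.images.shape[1:]

def recognition_rate(images, labels):
    X = images.reshape(len(images), -1)          # flatten pixels to features
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.25, random_state=0, stratify=labels)
    clf = SVC(kernel="rbf", C=1000, gamma="scale").fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

whole = recognition_rate(lfw.images, lfw.target)
upper = recognition_rate(lfw.images[:, : h // 2, :], lfw.target)  # eye region
lower = recognition_rate(lfw.images[:, h // 2 :, :], lfw.target)  # mouth region
print(f"whole: {whole:.3f}, upper half: {upper:.3f}, lower half: {lower:.3f}")
```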
Procedia PDF Downloads 361
5836 The Effects of Cross-Border Use of Drones in Nigerian National Security
Authors: H. P. Kerry
Abstract:
Drone technology has become a significant topic in national security. While the technology can constitute a danger to national security on the one hand, on the other it is used in developed and developing countries for border security and, in some cases, for the protection of security agents and migrants. In the case of Nigeria, drones are used by the military to monitor and tighten security around the borders. However, terrorist groups have devised means of utilizing the technology to their advantage, so the widespread proliferation of this technology carries a myriad of risks. This research on the effects of cross-border use of drones on Nigerian national security examines the negative and positive consequences of using drone technology. The study employs interviews and relevant documents to obtain data, applies Just War theory to explain why countries use force, and buttresses these points with the realist view of the use of force. In conclusion, the paper recommends that the Nigerian government, through the National Assembly, should pass a bill establishing a law to guide the use of armed and unarmed drones in Nigeria, enforced by the Nigeria Civil Aviation Authority and the office of the National Security Adviser.
Keywords: armed drones, drones, cross-border, national security
Procedia PDF Downloads 158
5835 Multimodal Database of Retina Images for Africa: The First Open Access Digital Repository for Retina Images in Sub-Saharan Africa
Authors: Simon Arunga, Teddy Kwaga, Rita Kageni, Michael Gichangi, Nyawira Mwangi, Fred Kagwa, Rogers Mwavu, Amos Baryashaba, Luis F. Nakayama, Katharine Morley, Michael Morley, Leo A. Celi, Jessica Haberer, Celestino Obua
Abstract:
Purpose: The main aim in creating the Multimodal Database of Retinal Images for Africa (MoDRIA) was to provide a publicly available repository of retinal images for responsible researchers to conduct algorithm development, in a bid to curb the challenges of ophthalmic artificial intelligence (AI) in Africa. Methods: Data and retina images were ethically sourced from sites in Uganda and Kenya. Data on medical history, visual acuity, ocular examination, blood pressure, and blood sugar were collected. Retina images were captured using fundus cameras (Foru3-nethra and Canon CR-Mark-1) and stored on a secure online database. Results: The database consists of 7,859 retinal images in portable network graphics format from 1,988 participants. Images from patients with human immunodeficiency virus made up 18.9%, 18.2% of images were from hypertensive patients, 12.8% from diabetic patients, and the rest from 'normal' participants. Conclusion: Publicly available data repositories are a valuable asset in the development of AI technology. Therefore, there is a need for the expansion of MoDRIA to provide larger datasets that are more representative of Sub-Saharan data.
Keywords: retina images, MoDRIA, image repository, African database
Procedia PDF Downloads 129
5834 Obstacle Classification Method Based on 2D LIDAR Database
Authors: Moohyun Lee, Soojung Hur, Yongwan Park
Abstract:
This paper proposes a method that uses only a LIDAR system to classify an obstacle and determine its type by establishing a database for classifying obstacles based on LIDAR. The existing LIDAR system has advantages in recognizing obstructions for an autonomous vehicle in terms of accuracy and shorter recognition time. However, it was difficult to determine the type of obstacle, and therefore accurate path planning based on obstacle type was not possible. To overcome this problem, a method of classifying obstacle type based on existing LIDAR using the width of the obstacle material was proposed; however, width measurement alone was not sufficient to improve accuracy. In this research, the width data were used for a first classification; a database of LIDAR intensity data for four major obstacle materials found on the road was created; the intensity data of actual obstacles were compared against this database; and the obstacle type was determined by finding the material with the highest similarity value. An experiment using an actual autonomous vehicle in a real environment shows that, although the data quality declined in comparison to 3D LIDAR, it was possible to classify obstacle materials using 2D LIDAR.
Keywords: obstacle, classification, database, LIDAR, segmentation, intensity
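A minimal sketch of the two-stage idea, under stated assumptions: a width gate narrows the candidate set, then the segmented obstacle's intensity profile is matched against a reference database by cosine similarity. The material profiles and the width threshold below are invented placeholders; a real database would hold measured 2D-LIDAR intensity statistics.

```python
# Two-stage obstacle classification: width gate, then intensity matching.
import numpy as np

# assumed reference mean-intensity histograms per material (normalized)
MATERIAL_DB = {
    "metal":    np.array([0.05, 0.10, 0.25, 0.60]),
    "plastic":  np.array([0.20, 0.40, 0.30, 0.10]),
    "concrete": np.array([0.30, 0.35, 0.25, 0.10]),
    "wood":     np.array([0.15, 0.45, 0.30, 0.10]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(width_m, intensity_hist):
    """Return the material whose profile is most similar to the scan."""
    if width_m > 2.5:                 # assumed width gate (e.g. vehicle-sized)
        candidates = ("metal", "concrete")
    else:
        candidates = tuple(MATERIAL_DB)
    return max(candidates, key=lambda m: cosine(MATERIAL_DB[m], intensity_hist))

scan = np.array([0.08, 0.12, 0.22, 0.58])   # segmented obstacle's histogram
print(classify(width_m=3.1, intensity_hist=scan))  # -> "metal"
```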
Procedia PDF Downloads 351
5833 Gypsum Composites with CDW as Raw Material
Authors: R. Santos Jiménez, A. San-Antonio-González, M. del Río Merino, M. González Cortina, C. Viñas Arrebola
Abstract:
On average, Europe generates around 890 million tons of construction and demolition waste (CDW) per year, and only 50% of this CDW is recycled. This is far from the objectives set in the European Directive for 2020, and aware of this situation, European countries are implementing national policies to prevent avoidable waste and to promote measures to increase recycling and recovery. In Spain, one of these measures has been the development of a CDW recycling guide for the manufacture of mortar, concrete, bricks, and lightweight aggregates. However, there is still not enough information on the possibility of incorporating CDW materials into the manufacture of gypsum products. In view of the foregoing, the Universidad Politécnica de Madrid is creating a database with information on the possibility of incorporating CDW materials into the manufacture of gypsum products. The objective of this study is to improve this database by analysing the feasibility of incorporating two different CDW into a gypsum matrix: ceramic waste bricks (perforated brick and double hollow brick) and extruded polystyrene (XPS) waste. Results show that it is possible to incorporate up to 25% of ceramic waste and 4% of XPS waste over the weight of gypsum in a gypsum matrix. Furthermore, with the addition of ceramic waste, an 8% increase in surface hardness and a 25% reduction in capillary water absorption can be obtained. On the other hand, with the addition of XPS, a 26% reduction in density and a 37% improvement in thermal conductivity can be obtained.
Keywords: CDW, waste materials, ceramic waste, XPS, construction materials, gypsum
Procedia PDF Downloads 511
5832 Aggregate Fluctuations and the Global Network of Input-Output Linkages
Authors: Alexander Hempfing
Abstract:
The desire to understand business cycle fluctuations, trade interdependencies, and co-movement has a long tradition in economic thinking. From input-output economics to business cycle theory, researchers have aimed to find appropriate answers from both empirical and theoretical perspectives. This paper empirically analyses how the production structure of the global economy and of several states developed over time, what their distributional properties are, and whether there are network-specific metrics that allow the identification of structurally important nodes on a global, national, and sectoral scale. For this, the World Input-Output Database was used, and different statistical methods were applied. Empirical evidence is provided that the importance of the Eastern hemisphere in the global production network increased significantly between 2000 and 2014. Moreover, it was possible to show that the sectoral eigenvector centrality indices on a global level are power-law distributed, providing evidence that specific national sectors exist which are more critical to the world economy than others, serving as hubs within the global production network. However, further findings suggest that global production cannot be characterized as a scale-free network.
Keywords: economic integration, industrial organization, input-output economics, network economics, production networks
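A minimal sketch of the centrality computation described here, assuming networkx: an input-output table is read as a directed, weighted inter-sector network, and sectors are ranked by eigenvector centrality. The tiny flow matrix and sector labels are placeholders for the real World Input-Output Database flows.

```python
# Rank sectors of an input-output network by eigenvector centrality.
import networkx as nx
import numpy as np

sectors = ["DE-manuf", "CN-manuf", "US-services", "CN-mining"]
flows = np.array([            # assumed flows: row sector supplies column sector
    [0, 30, 10, 0],
    [25, 0, 40, 5],
    [10, 20, 0, 2],
    [5, 35, 3, 0],
], dtype=float)

G = nx.from_numpy_array(flows, create_using=nx.DiGraph)
G = nx.relabel_nodes(G, dict(enumerate(sectors)))

centrality = nx.eigenvector_centrality_numpy(G, weight="weight")
for sector, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{sector}: {score:.3f}")    # hub sectors appear first
```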
Procedia PDF Downloads 279
5831 Client Hacked Server
Authors: Bagul Abhijeet
Abstract:
Background: The client-server model is the backbone of today's internet communication, in which a normal user does not have control over a particular website or server. Using the same processing model, however, one can gain unauthorized access to a particular server. In this paper, we discuss an application scenario of hacking a simple website or server, consisting of an unauthorized way of accessing the server database. The application autonomously takes direct access to a simple website or server and retrieves all essential information maintained by the administrator. In this system, the IP address of the server is given as input to retrieve the user-id and password of the server. This breaks the administrative security of the server and acquires control of the server database, whereas a virus helps to escape the server security by crashing the whole server. Objective: To control malicious attacks, protect government websites, and detect illegal hacker activity. Results: After implementing different hacking as well as non-hacking techniques, this system hacks simple websites with normal security credentials. It provides access to the server database and allows the attacker to perform database operations from a client machine. Experiments running the application against different servers provided satisfactory results as required. Conclusion: In this paper, we have presented a view of how to hack a server, including some hacking as well as non-hacking methods. These algorithms and methods provide an efficient way to hack a server database; breaking the network security allows a new and better security framework to be introduced. The term 'hacking' should not only be considered for its illegal activities but should also be used to strengthen our global network.
Keywords: hacking, vulnerabilities, dummy request, virus, server monitoring
Procedia PDF Downloads 252
5830 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations consider the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organizations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems to help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance that we call enhanced memory access (EMA) can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. These promising results show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
Keywords: in-memory database, disk-based system, hybrid database, concurrency control
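A conceptual sketch of the prefetching principle behind EMA, not the authors' implementation: a transaction's read set is fetched from disk concurrently into an in-memory cache before execution, so the transaction itself runs at near in-memory speed. Page latency and the read-set model are illustrative assumptions.

```python
# Illustrate concurrent prefetching of a transaction's read set.
import time
from concurrent.futures import ThreadPoolExecutor

CACHE = {}

def read_page(page_id):
    time.sleep(0.01)                 # simulated disk latency per page
    return f"page-{page_id}".encode()

def prefetch(read_set):
    """Fetch all pages of a transaction's read set concurrently."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        for pid, data in zip(read_set, pool.map(read_page, read_set)):
            CACHE[pid] = data

def run_transaction(read_set):
    prefetch(read_set)               # pages now resident before execution
    return [CACHE[pid] for pid in read_set]

start = time.perf_counter()
run_transaction(list(range(32)))
print(f"32 pages in {time.perf_counter() - start:.3f}s (vs ~0.32s serially)")
```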
Procedia PDF Downloads 420
5829 Challenges, Chances and Possibilities during the Change Management Process of the National Defence Academy Vienna
Authors: Georg Ebner
Abstract:
The National Defence Academy, an element of the Austrian Ministry of Defence, is undergoing a transition process leading the Academy towards a new target structure that is currently being developed. In doing so, in addition to a subject-oriented approach, an employee-oriented process was also introduced. This process was initiated by the Ministry of Defence and should lead the National Defence Academy into a new constellation. During this process, the National Defence Academy worked in specially adapted World Café sessions. The change managers dealt with very different issues: they took the data feedback from the sessions and, together with information from the steering guidance, prepared the next session. In this way they gathered varied information and a differentiated picture of the academy. It was very helpful to involve most of the employees of the academy in this process and to draw on their knowledge and wisdom. The process itself started with very different feelings and ended with great consent. A very interesting aspect of this process was that the commander and his deputy worked together during all of these sessions and answered all questions from the employees promptly. The adapted World Café phases were necessary to process the information from the staff and to feed these indispensable data into the process. In cooperation with the responsible Headquarters, the first items resulting from the World Café phases could already be fed back to the employees and implemented. The staff-oriented process is currently supported via a point of contact, through which the staff can contribute ideas, as well as by the active information policy on the part of the Headquarters. The described change process makes genuine innovation possible. So far, in change processes, staff members have been entrusted only with the concrete implementation plan and tied into the process only when the respective workplaces were to be re-staffed. The procedure described here can be seen as food for thought for further change processes. The findings are that a staff-oriented process can lead an organisation into a new era of thinking and working, and that many innovative ideas can also arise in a ministry. This process can serve as a background for many change management processes in ministries and in governmental and non-governmental organisations.
Keywords: both-directions approach, change management, knowledge database, transformation process, World Café
Procedia PDF Downloads 194
5828 Perusing the Influence of a Visual Editor in Enabling PostgreSQL Query Learn-Ability
Authors: Manuela Nayantara Jeyaraj
Abstract:
PostgreSQL is an object-relational database management system (ORDBMS) with an architecture that ensures optimal-quality data management. But due to the overshadowing growth of similar ORDBMSs, PostgreSQL has not been well known among the database user community. Despite having its features and built-in functionalities overshadowed, PostgreSQL offers a vast range of utilities for data manipulation, and hence deserves to be promoted more among users. Introducing PostgreSQL in a way that stimulates interest in its advantageous features mandates endorsing learn-ability as an add-on, as the target groups considered consist of both amateur and professional PostgreSQL users. The scope of this paper is to provide easy contemplation of query formulations and flows through a visual editor designed according to user-interface principles that supports every aspect of making PostgreSQL learnable through self-operation and the creation of queries within the visual editor. This paper scrutinizes the importance of choosing PostgreSQL as the working database environment, the visual perspectives that influence human behaviour and ultimately learning, the modes in which learn-ability can be provided via visualization, and the advantages reaped by the implementation of the proposed system features.
Keywords: database, learn-ability, PostgreSQL, query, visual-editor
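A minimal sketch of executing the kind of query a visual editor would compose for a learner, using psycopg2. The connection parameters, table names, and join are illustrative assumptions, not part of the paper's system.

```python
# Run a beginner-level join query against PostgreSQL via psycopg2.
import psycopg2

conn = psycopg2.connect(dbname="demo", user="postgres",
                        password="secret", host="localhost")
with conn, conn.cursor() as cur:
    # the SQL a learner might assemble visually, then inspect as text
    cur.execute("""
        SELECT c.name, COUNT(o.id) AS orders
        FROM customers c
        LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name
        ORDER BY orders DESC;
    """)
    for name, n_orders in cur.fetchall():
        print(name, n_orders)
conn.close()
```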
Procedia PDF Downloads 174
5827 A Cloud-Based Spectrum Database Approach for Licensed Shared Spectrum Access
Authors: Hazem Abd El Megeed, Mohamed El-Refaay, Norhan Magdi Osman
Abstract:
Spectrum scarcity is a challenging obstacle in wireless communications systems. It hinders the introduction of innovative wireless services and technologies that require larger bandwidth compared to legacy technologies. In addition, the current worldwide allocation of radio spectrum bands is already congested and cannot afford additional squeezing or optimization to accommodate new wireless technologies. This challenge is the result of accumulated contributions from different factors that will be discussed later in this paper. One of these factors is the radio spectrum allocation policy governed by national regulatory authorities today. The framework for this policy allocates a specified portion of radio spectrum to a particular wireless service provider on an exclusive-utilization basis. This allocation is executed according to technical specifications determined by the standards bodies of each radio access technology (RAT). Dynamic spectrum access is a framework for flexible utilization of radio spectrum resources. In this framework, there is no exclusive allocation of radio spectrum, and even public safety agencies can share their spectrum bands according to a governing policy and service level agreements. In this paper, we explore different methods for accessing the spectrum dynamically and the associated implementation challenges.
Keywords: licensed shared access, cognitive radio, spectrum sharing, spectrum congestion, dynamic spectrum access, spectrum database, spectrum trading, reconfigurable radio systems, opportunistic spectrum allocation (OSA)
Procedia PDF Downloads 433
5826 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan
Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid
Abstract:
In geophysical exploration surveys, the quality of the acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan, were taken as a test case in order to assess their quality on a statistical basis using the normalized root mean square error (NRMSE), Cronbach's alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be plain, tectonically little affected, and rich in oil and gas reserves; however, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed a high percentage of residuals between the estimated and predicted cases. The outcomes of the hypothesis tests also demonstrated the bias and erraticness of the acquired database, and the low estimated value of alpha (α) in Cronbach's alpha test confirmed its poor reliability. A database of such low quality needs excessive static correction or, in some cases, reacquisition of the data, which is most of the time not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models to support much more informed decisions in the hydrocarbon exploration field.
Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey
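A minimal sketch of two of the quality metrics named above: NRMSE between observed and predicted values, and Cronbach's alpha across repeated measurements (e.g., overlapping seismic lines). The synthetic arrays are placeholders for the survey data.

```python
# Compute NRMSE and Cronbach's alpha for data-quality assessment.
import numpy as np

def nrmse(observed, predicted):
    """RMSE normalized by the observed range."""
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """items: (n_items, n_observations) array of repeated measurements."""
    items = np.asarray(items, dtype=float)
    k = items.shape[0]
    item_vars = items.var(axis=1, ddof=1).sum()
    total_var = items.sum(axis=0).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
obs = rng.normal(size=200)
pred = obs + rng.normal(scale=0.5, size=200)     # noisy "predicted" case
lines = np.vstack([obs + rng.normal(scale=0.4, size=200) for _ in range(5)])
print(f"NRMSE = {nrmse(obs, pred):.3f}, alpha = {cronbach_alpha(lines):.3f}")
```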
Procedia PDF Downloads 165
5825 Cluster Analysis of Students’ Learning Satisfaction
Authors: Purevdolgor Luvsantseren, Ajnai Luvsan-Ish, Oyuntsetseg Sandag, Javzmaa Tsend, Akhit Tileubai, Baasandorj Chilhaasuren, Jargalbat Puntsagdash, Galbadrakh Chuluunbaatar
Abstract:
One of the indicators of the quality of university services is student satisfaction. Aim: We aimed to study the level of satisfaction of first-year premedical students with the Medical Physics course using the cluster method. Materials and Methods: Within the framework of this goal, questionnaires were collected from a total of 324 first-year premedical students who took the medical physics course at the Mongolian National University of Medical Sciences. When determining the level of satisfaction, answers were obtained on five levels: 'excellent', 'good', 'medium', 'bad', and 'very bad'. The questionnaire comprised 39 items: 8 for course evaluation, 19 for teacher evaluation, and 12 for student evaluation. From the research, a database with 39 fields and 324 records was created. Results: On this database, cluster analysis was performed in MATLAB and R using the k-means method of data mining. The Hopkins statistics calculated on the database are 0.88, 0.87, and 0.97, which shows that cluster analysis methods can be applied. The course evaluation sub-database is divided into three clusters: cluster I has 150 objects with a 'good' rating (46.2%), cluster II has 119 objects with a 'medium' rating (36.7%), and cluster III has 54 objects (16.6%). The teacher evaluation sub-database is divided into three clusters: cluster II has 179 objects with a 'good' rating (55.2%), cluster III has 108 objects with an 'average' rating (33.3%), and cluster I has 36 objects (11.1%). The student evaluation sub-database is divided into two clusters: cluster II has 215 objects (66.3%) and cluster I has 108 objects (33.3%). Evaluating the resulting clusters with the silhouette coefficient gives 0.32 for the course evaluation clusters, 0.31 for the teacher evaluation clusters, and 0.30 for the student evaluation clusters, showing statistical significance. Conclusion: In the cluster analysis of the medical physics course model, the ratings were 'good' 46.2%, 'medium' 36.7%, and 'bad' 16.6%; in the teacher evaluation model, 'good' 55.2%, 'medium' 33.3%, and 'bad' 11.1%; and in the student evaluation model, 'good' 66.3% and 'bad' 33.3%.
Keywords: questionnaire, data mining, k-means method, silhouette coefficient
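A minimal sketch of the clustering pipeline described, assuming scikit-learn: k-means on 5-level satisfaction responses, with cluster sizes and the silhouette coefficient reported. The random matrix stands in for the 324-record questionnaire database.

```python
# k-means clustering of satisfaction responses with silhouette validation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# 324 students x 8 course-evaluation items, coded 1 ("very bad") .. 5 ("excellent")
X = rng.integers(1, 6, size=(324, 8)).astype(float)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
sizes = np.bincount(km.labels_)
print("cluster sizes:", sizes, "shares:", np.round(sizes / len(X), 3))
print("silhouette:", round(silhouette_score(X, km.labels_), 3))
```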
Procedia PDF Downloads 51
5824 The Relationship between the Feeling of Distributive Justice and National Identity of the Youth
Authors: Leila Batmany
Abstract:
This research studies the relationship between the feeling of distributive justice and the national identity of the youth. The present analysis experimentally investigates the various dimensions of the feeling of justice and its effect on the components of national identity. The study considers justice from four points of view on the basis of the availability of valuable social resources (power, wealth, knowledge, and status), corresponding to political, economic, cultural, and status justice, respectively. Furthermore, national identity has been considered as the feeling of honour, attachment, and commitment towards national society and its seven components, i.e., history, language, culture, political system, religion, geographical territory, and society. The field study was used as the research method, with the individual as the unit of analysis, covering 368 young people between the ages of 18 and 29 living in Tehran, chosen randomly with the sample size determined according to the Cochran formula. The individual samples were randomly chosen from five districts in the north, south, west, east, and centre of Tehran, based on multistage cluster sampling. Data collection was performed using a questionnaire and interviews. The most important results are as follows: i) The feeling of economic justice is the weakest among the youth. ii) The strongest and weakest dimensions of national identity are, respectively, the historical and the social dimension. iii) There is a positive and meaningful relationship between the feeling of political and status justice and national identity, whereas no meaningful relationship exists between economic and cultural justice and national identity. iv) There is a positive and meaningful relationship between the feeling of justice in all dimensions and the legitimacy of the political system, and likewise between the legitimacy of the political system and national identity. v) Generally, there is a positive and meaningful relationship between the feeling of distributive justice and national identity among the youth. vi) It is through the legitimacy of the political system that the feeling of justice can influence national identity.
Keywords: distributive justice, national identity, legitimacy of political system, Cochran formula, multistage cluster sampling
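A minimal sketch of the Cochran sample-size calculation named in the keywords: n0 = z^2 p(1-p) / e^2, optionally shrunk by the finite population correction. The margin of error and the population size below are assumptions chosen only to illustrate how a sample of roughly 368 can arise.

```python
# Cochran formula for sample size, with finite population correction.
import math

def cochran(z=1.96, p=0.5, e=0.05, N=None):
    n0 = z ** 2 * p * (1 - p) / e ** 2      # infinite-population sample size
    if N is not None:                       # finite population correction
        n0 = n0 / (1 + (n0 - 1) / N)
    return math.ceil(n0)

print(cochran())          # ~385 for an unbounded population
print(cochran(N=9000))    # correction shrinks n toward ~369
```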
Procedia PDF Downloads 135
5823 Using Hyperspectral Camera and Deep Learning to Identify the Ripeness of Sugar Apples
Authors: Kuo-Dung Chiou, Yen-Xue Chen, Chia-Ying Chang
Abstract:
This study uses AI technology to establish an expert system and a fruit appearance database for pineapples and custard apples. Images are collected based on appearance defects and fruit maturity, and deep learning is used to detect the location of the fruit and its appearance flaws and maturity in real time. In addition, a hyperspectral camera was used to scan pineapples and custard apples, and the light reflection in different frequency bands was used to find the key band for pectin softening in post-ripe fruits. A large number of multispectral images were collected and analyzed to establish a database of the pineapple custard apple and the big-eyed custard apple, which includes a high-definition color image database, a hyperspectral database covering the 377-1020 nm band, and a five-band (450, 500, 670, 720, 800 nm) multispectral database. The collection comprises 4,896 images with manually labeled ground truth, 26 hyperspectral pineapple custard apple fruits (520 images each), and 168 multispectral custard apple fruits (5 images each). Using the color image database to train the pre-trained YOLOv4 network architecture and adding the training weights established from the fruit database, real-time detection performance is achieved, and the recognition rate exceeds 97.96%. We also took a large number of continuous multispectral shots and calculated the difference and the average ratio of fruit reflectance in the 670 and 720 nm bands. Both indices follow the same trend: the value increases until maturity and decreases after maturity. Subsequently, further sub-bands will be added to analyze sugar content and moisture numerically, and the absolute value and data curve of maturity will be derived.
Keywords: hyperspectral image, fruit firmness, deep learning, automatic detection, automatic measurement, intelligent labor saving
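A minimal sketch of the band-ratio maturity cue described above: mean reflectance of the fruit pixels in the 670 nm and 720 nm bands, from which the difference and ratio are tracked over time. The arrays below are placeholder frames, not the study's data.

```python
# Difference and ratio of 670/720 nm reflectance as maturity indices.
import numpy as np

def maturity_indices(band670, band720, mask=None):
    """band*: 2D reflectance images of one fruit; mask selects fruit pixels."""
    if mask is None:
        mask = np.ones_like(band670, dtype=bool)
    m670 = band670[mask].mean()
    m720 = band720[mask].mean()
    return m720 - m670, m720 / m670        # difference and average ratio

rng = np.random.default_rng(2)
b670 = rng.uniform(0.2, 0.3, size=(64, 64))   # placeholder 670 nm frame
b720 = rng.uniform(0.4, 0.5, size=(64, 64))   # placeholder 720 nm frame
diff, ratio = maturity_indices(b670, b720)
print(f"diff={diff:.3f}, ratio={ratio:.3f}")  # rises until ripeness, then falls
```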
Procedia PDF Downloads 3
5822 Using Deep Learning in Lyme Disease Diagnosis
Authors: Teja Koduru
Abstract:
Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease, is therefore crucial to early diagnosis and treatment. In this study, we utilize deep learning frameworks, including TensorFlow and Keras, to create deep convolutional neural networks (DCNNs) that detect acute Lyme disease from images of erythema migrans. The study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images as EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and regular skin, resulting in a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non-EM. Finally, the model was converted into a web and mobile application to allow for rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and lead to a lower mortality rate from Lyme disease.
Keywords: Lyme, untreated Lyme, erythema migrans rash, EM rash
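A minimal Keras sketch of a binary EM vs. non-EM classifier of the kind described; the layer sizes, image size, and directory layout are illustrative assumptions, not the authors' exact architecture or dataset.

```python
# Train a small CNN to classify EM vs. non-EM rash images.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "rashes/train", image_size=(224, 224), batch_size=32,
    label_mode="binary")           # folders: rashes/train/{em,non_em}/

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # P(image shows an EM rash)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```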
Procedia PDF Downloads 242
5821 Relevance of History to National Development
Authors: Abdulsalami Muyideen Deji
Abstract:
The achievement of one age serves as a starting point for the next generation. History explains the significance of past and present achievements, which serve as a guiding principle for great minds in determining the next line of action in personal life, which in turn translates to national development. If history does this in human life, it is not out of place to accept history as a vanguard of national development. History remains the relevant discipline that shapes the affairs of developed societies. It gives adequate knowledge of the great people in any society and of how they used their abilities and leadership prowess to develop their environment. As a result, people use the ideas of those heroes as guiding principles in dealing with present issues. The custodian of identity is history; identity builds confidence in man, and it also enables man to master his environment for rapid development. Adequate development of man's environment translates to national development.
Keywords: history, national development, leadership prowess, identity
Procedia PDF Downloads 400
5820 Static Analysis of Security Issues of the Python Packages Ecosystem
Authors: Adam Gorine, Faten Spondon
Abstract:
Python is considered the most popular programming language and offers its own ecosystem for archiving and maintaining open-source software packages. This system is called the Python Package Index (PyPI), the repository of the language. Unfortunately, one-third of these software packages have vulnerabilities that allow attackers to execute code automatically when a vulnerable or malicious package is installed. This paper contributes to the large-scale empirical studies investigating security issues in the Python ecosystem by evaluating package vulnerabilities. It provides a series of implications that can help secure software ecosystems by improving the process of discovering, fixing, and managing package vulnerabilities. The vulnerability dataset was generated using the NVD (National Vulnerability Database) and the Snyk vulnerability database. In addition, we evaluated 807 vulnerability reports in the NVD and 3,900 publicly known security vulnerabilities in the Python package manager (pip) from the Snyk database, covering 2002 to 2022. As a result, many Python vulnerabilities are of high severity, followed by medium severity; the most problematic areas have been improper input validation and denial-of-service attacks. A hybrid scanning tool that combines the three scanners Bandit, Snyk, and Dlint, providing a clear report of code vulnerabilities, is also described.
Keywords: Python vulnerabilities, Bandit, Snyk, Dlint, Python Package Index, ecosystem, static analysis, malicious attacks
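A minimal sketch of one leg of such a hybrid scanner: running Bandit over a package's source tree with its JSON output format and summarizing findings by severity. Combining Snyk and Dlint results would follow the same pattern; the target path is a placeholder.

```python
# Run Bandit recursively and tally issue severities from its JSON report.
import json
import subprocess
from collections import Counter

def bandit_scan(path):
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json", "-q"],
        capture_output=True, text=True)
    report = json.loads(proc.stdout)
    return report.get("results", [])

issues = bandit_scan("some_package/")
print(Counter(i["issue_severity"] for i in issues))   # HIGH/MEDIUM/LOW counts
for i in issues[:5]:
    print(f'{i["filename"]}:{i["line_number"]} {i["issue_text"]}')
```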
Procedia PDF Downloads 140
5819 Creating Database and Building 3D Geological Models: A Case Study on Bac Ai Pumped Storage Hydropower Project
Authors: Nguyen Chi Quang, Nguyen Duong Tri Nguyen
Abstract:
This article is a first step toward researching and outlining the structure of the geotechnical database in the geological survey of a power project; in this context, the database has been created for the Bac Ai pumped storage hydropower project. To provide a method of organizing and storing geological and topographic survey data and experimental results in a spatial database, the RockWorks software is used, bringing optimal efficiency to the process of exploiting, using, and analyzing data in service of the design work in power engineering consulting. Three-dimensional (3D) geotechnical models are created from the survey data, covering stratigraphy, lithology, porosity, etc. The results of the 3D geotechnical model for the Bac Ai pumped storage hydropower project include six closely stacked stratigraphic formations built with the Horizons method, whereas the modeling of engineering geological parameters is performed with geostatistical methods. Accuracy and reliability are assessed through error statistics, empirical evaluation, and expert methods. The three-dimensional model analysis allows better visualization of volumetric calculations, excavation and backfilling of the lake area, tunneling of power pipelines, and calculation of on-site construction material reserves. In general, the application of engineering geological modeling makes the design work more intuitive and comprehensive, helping construction designers better identify and offer the most optimal design solutions for the project. The database ensures constant updating and synchronization and enables 3D models of geological and topographic data to be integrated with design data according to building information modeling (BIM). This is also the base platform for BIM and GIS integration.
Keywords: database, engineering geology, 3D model, RockWorks, Bac Ai pumped storage hydropower project
Procedia PDF Downloads 169
5818 Development of an Asset Database to Enhance the Circular Business Models for the European Solar Industry: A Design Science Research Approach
Authors: Ässia Boukhatmi, Roger Nyffenegger
Abstract:
The expansion of solar energy as a means to address the climate crisis is undisputed, but the increasing number of new photovoltaic (PV) modules being put on the market is simultaneously leading to increased challenges in managing the growing waste stream. Many of the discarded modules are still fully functional but are often damaged by improper handling after disassembly or not properly tested to be considered for a second life. In addition, the collection rate for dismantled PV modules in several European countries is only a fraction of previous projections, partly due to the increased number of illegal exports. The underlying problem behind these market imperfections is insufficient data exchange between the different actors along the PV value chain, as well as the limited traceability of PV panels during their lifetime. As part of the Horizon 2020 project CIRCUSOL, an asset database prototype was developed to tackle the described problems. In an iterative process applying the design science research methodology, different business models, as well as the technical implementation of the database, were established and evaluated. To explore the requirements of different stakeholders for the development of the database, surveys and in-depth interviews were conducted with various representatives of the solar industry. The proposed database prototype maps the entire value chain of PV modules, beginning with the digital product passport, which provides information about the materials and components contained in every module. Product-related information can then be expanded with performance data from existing installations. This information forms the basis for the application of data analysis methods to forecast the appropriate end-of-life strategy, as well as the circular economy potential of PV modules, even before they arrive at the recycling facility. The database prototype could already be enriched with data from different sources along the value chain. From a business model perspective, the database offers opportunities both in the area of reuse and with regard to the certification of sustainable modules; here, participating actors have the opportunity to differentiate their business and exploit new revenue streams. Future research can apply this approach to further industry and product sectors, validate the database prototype in a practical context, and serve as a basis for standardization efforts to strengthen the circular economy.
Keywords: business model, circular economy, database, design science research, solar industry
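A minimal sketch of a relational core for such an asset database: one digital product passport per module plus time-stamped performance records. The field names and SQLite backend are illustrative assumptions, not the CIRCUSOL schema.

```python
# Create a tiny passport + performance schema and insert sample rows.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS module_passport (
    serial_no      TEXT PRIMARY KEY,
    manufacturer   TEXT,
    model          TEXT,
    rated_power_w  REAL,
    materials_json TEXT           -- components/materials for recyclers
);
CREATE TABLE IF NOT EXISTS performance_record (
    serial_no   TEXT REFERENCES module_passport(serial_no),
    measured_at TEXT,             -- ISO timestamp
    yield_kwh   REAL,
    PRIMARY KEY (serial_no, measured_at)
);
"""

conn = sqlite3.connect("pv_assets.sqlite")
conn.executescript(SCHEMA)
conn.execute("INSERT OR IGNORE INTO module_passport VALUES (?,?,?,?,?)",
             ("SN-001", "ExampleSolar", "ES-300", 300.0,
              '{"frame": "aluminium", "cells": "c-Si"}'))
conn.execute("INSERT OR IGNORE INTO performance_record VALUES (?,?,?)",
             ("SN-001", "2021-06-30T12:00:00", 0.21))
conn.commit()
```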
Procedia PDF Downloads 129
5817 Steps towards the Development of National Health Data Standards in Developing Countries
Authors: Abdullah I. Alkraiji, Thomas W. Jackson, Ian Murray
Abstract:
Today's health data standards overlap and conflict to some extent, resulting in market confusion and increasing proprietary interests. The government's role and support in health data standardization are thought to be crucial in order to establish credible standards for the next decade, to maximize interoperability across the health sector, and to decrease the risks associated with the implementation of non-standard systems. The normative literature has missed the exploration of the different steps required of government in the development of national health data standards. Based on the lessons learned from a qualitative study investigating the different issues in the adoption of health data standards in the major tertiary hospitals in Saudi Arabia, together with the opinions and feedback of different experts in the areas of data exchange, standards, and medical informatics in Saudi Arabia and the UK, a list of steps required for the development of national health data standards was constructed. The main steps are the existence of a national formal reference for health data standards, an agreed national strategic direction for medical data exchange, a national medical information management plan, and a national accreditation body; more important still is change management at the national and organizational levels. The outcome of this study can be used by academics and practitioners to plan health data standards, in particular in developing countries.
Keywords: interoperability, medical data exchange, health data standards, case study, Saudi Arabia
Procedia PDF Downloads 340
5816 Investigating Real Ship Accidents with Descriptive Analysis in Turkey
Authors: İsmail Karaca, Ömer Söner
Abstract:
The use of advanced methods has been increasing day by day in the maritime sector, one of the sectors least affected by the COVID-19 pandemic. These methods aim to minimize accidents, especially when applied to the investigation of marine accidents. This research conducted an exploratory statistical analysis of particular ship accidents recorded in the database of the Transport Safety Investigation Center of Turkey. Forty-six ship accidents that occurred between 2010 and 2018 were selected from the database. In addition to the availability of a reliable and comprehensive database, taking advantage of robust statistical models for investigation is critical to improving the safety of ships. Thus, descriptive analysis was used in the research to identify causes and conditional factors related to different types of ship accidents. The research outcomes underline the fact that environmental factors and the day/night ratio have a great influence on ship safety.
Keywords: descriptive analysis, maritime industry, maritime safety, ship accident statistics
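A minimal sketch of the descriptive analysis described, assuming pandas: summarizing the accident table and cross-tabulating accident type against day/night and environmental condition. The column names and inline records are illustrative; the real data come from the Transport Safety Investigation Center database.

```python
# Descriptive statistics and cross-tabulations over accident records.
import pandas as pd

accidents = pd.DataFrame({
    "type":      ["collision", "grounding", "collision", "fire"],
    "condition": ["fog", "clear", "storm", "clear"],
    "daytime":   ["night", "day", "night", "day"],
})

print(accidents.describe(include="all"))                  # counts, modes
print(pd.crosstab(accidents["type"], accidents["daytime"]))
print(accidents.groupby("condition").size().sort_values(ascending=False))
```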
Procedia PDF Downloads 139
5815 Comparison between RILM, JSTOR, and WorldCat Used to Search for Secondary Literature
Authors: Stacy Jarvis
Abstract:
Databases such as JSTOR, RILM, and WorldCat have been the main sources and stores of literature in the music sphere. RILM is a bibliographic database of over 2.6 million citations to writings about music from over 70 countries, produced by the Research Institute for the Study of Music at the University of Buffalo. JSTOR is an e-library of academic journals, books, and primary sources; it helps scholars find, utilise, and build upon a vast range of literature through a powerful teaching and research platform. Another database, WorldCat, is the world's biggest library catalogue, assisting scholars in finding library materials online. An evaluation of these databases in the music sphere is conducted by looking into their descriptions and intended uses and finding similarities and differences among them. Through comparison, it is found that the databases serve different purposes, though they share the same goal of providing and storing literature. Also, since each database majors on different parts of the literature, the intended use of the three databases is evaluated; this is covered in the section on description, scope, and intended uses, which is crucial to the research as it addresses the functional and literature differences among the three databases. It is also found that these databases have different quantitative potential, determined by the year each database began collecting literature and the number of articles, periodicals, albums, conference proceedings, music, dissertations, digital media, essay collections, journal articles, monographs, online resources, reviews, and reference materials found in each of them. To compare the delivery of services to users, the importance of the databases in identifying literature on different topics is also addressed in its own section. Even though all these databases are used in research, each has advantages and disadvantages, addressed in the corresponding sections; this is significant in determining which of the three is the best, and it also helps show how the shortcomings of one database can be addressed by utilising two databases together while conducting research, as discussed in the section on combining RILM and JSTOR. All this information revolves around the idea that a huge amount of quantitative and qualitative data on music and digital content can be found in the presented databases; however, each database has a different construction and material features, contributing to musical scholarship in its own way.
Keywords: RILM, JSTOR, WorldCat, database, literature, research
Procedia PDF Downloads 83
5814 Exploring Relationship of National Talent Retention and National Value Proposition
Authors: Dzul Fahmi Md. Nordin, Rosmini Omar
Abstract:
This conceptual paper aims to explore the concept of National Talent Retention for a nation by extending the work on talent retention in organizations to the scope of nations. The objective of this paper is to explore the relationship of National Talent Retention, as the dependent variable, with three explored value propositions, namely the Firm Value Proposition, the Higher Education and Training Value Proposition, and the National Attractiveness Value Proposition, as the independent variables. Life Satisfaction is introduced in this study as a moderating variable, to explore the possibility that Life Satisfaction mediates the relationship between National Value Proposition and National Talent Retention. Theories such as migration, value propositions, life satisfaction, human resource management, and the resource-based view are referred to in order to understand and explore the concept of National Talent Retention. Malaysia is chosen as the background of this study, since Malaysia represents a developing nation with progressive economic, educational, and national policies, which presents an interesting background for this exploratory paper. Surprisingly, Malaysia is still facing the phenomenon of brain drain, which, if not handled properly, will hinder its Vision 2020 goal of becoming a fully developed nation by the year 2020. A mixed-methodology analysis is proposed in this paper, including both qualitative face-to-face interviews and a quantitative survey questionnaire, to study the explored value proposition factors. Target respondents are strictly confined to high-skilled Malaysian talents, either residing in Malaysia or having migrated abroad, since this paper is mainly interested in studying the concept of National Talent Retention and how successfully Malaysia projects its value propositions as perceived by high-skilled Malaysians. It is hoped that this paper can contribute towards understanding the concept of National Talent Retention, whereby the model could be replicated to identify influential factors specific to other nations.
Keywords: national talent retention, national value proposition, life satisfaction, high-skilled talents
Procedia PDF Downloads 403
5813 Transcriptome Analysis of Saffron (Crocus sativus L.) Stigma Focusing on Identification of Genes Involved in the Biosynthesis of Crocin
Authors: Parvaneh Mahmoudi, Ahmad Moeni, Seyed Mojtaba Khayam Nekoei, Mohsen Mardi, Mehrshad Zeinolabedini, Ghasem Hosseini Salekdeh
Abstract:
Saffron (Crocus sativus L.) is one of the most important spice and medicinal plants. The three-branched style of the C. sativus flower is the most economically important part of the plant and is known as saffron, which has several medicinal properties. Despite the economic and biological significance of this plant, knowledge of its molecular characteristics is very limited. In the present study, we constructed, for the first time, a comprehensive dataset for the C. sativus stigma through de novo transcriptome sequencing using the Illumina paired-end sequencing technology. A total of 52,075,128 reads were generated and assembled into 118,075 unigenes, with an average length of 629 bp and an N50 of 951 bp. Of these unigenes, 66,171 (56%) were annotated in the non-redundant National Center for Biotechnology Information (NCBI) database, 30,938 (26%) were annotated in the Swiss-Prot database, and 10,273 (8.7%) were mapped to 141 Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, while 52,560 (44%) and 40,756 (34%) unigenes were assigned to Gene Ontology (GO) categories and Eukaryotic Orthologous Groups of proteins (KOG), respectively. In addition, 65 candidate genes involved in the three stages of crocin biosynthesis were identified. Finally, the transcriptome sequencing of the saffron stigma was used to identify 6,779 potential microsatellite (SSR) molecular markers. This high-throughput de novo transcriptome sequencing provides a valuable resource of C. sativus transcript sequences in public databases. In addition, most of the candidate genes potentially involved in crocin biosynthesis were identified, which could be further utilized in functional genomics studies. Furthermore, the numerous SSRs obtained might help address open questions about the origin of this amphiploid species, which probably has little genetic diversity.
Keywords: saffron, transcriptome, NGS, bioinformatics
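A minimal sketch of the N50 statistic quoted for the assembly (951 bp over 118,075 unigenes): the contig length at which contigs of that length or longer contain at least half of the total assembled bases.

```python
# N50: sort lengths descending and walk until half the total bases are covered.
def n50(lengths):
    lengths = sorted(lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([1000, 800, 600, 400, 200]))  # -> 800
```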
Procedia PDF Downloads 102
5812 Comparison Analysis of Science and Technology Council between Korea, USA, and Japan
Authors: Daekook Kang, Wooseok Jang, Jeonghwan Jeon
Abstract:
As the Korean government has expanded the budget for national research and development, the need for an institution responsible for the deliberation, coordination, and operation of R&D programs and their budgets has increased continuously. In response to the demands of the times, the National Science & Technology Council (NSTC) was recently established. However, to achieve a creative economy more efficiently, fundamental introspection on the current state of the national administration system of science and technology in Korea is needed. Accordingly, this study first analyzes the function and organizational structure of the NSTC in Korea. It then investigates the current state of the national science and technology councils in major countries. Finally, it derives implications based on a comparison of the NSTC in Korea with its counterparts in these countries. The present study will help in finding a way to advance the NSTC in Korea.
Keywords: comparison analysis of science & technology council (NSTC), CSTP, national science & technology council in Korea, operating system of NSTC
Procedia PDF Downloads 429
5811 An Optimized Association Rule Mining Algorithm
Authors: Archana Singh, Jyoti Agarwal, Ajay Rana
Abstract:
Data mining is an efficient technology for discovering patterns in large databases. Association rule mining (ARM) techniques are used to find the correlation between the various item sets in a database, and this correlation is used in decision making and pattern analysis. In recent years, the problem of finding association rules from large datasets has been addressed by many researchers. Various research papers on ARM were first studied and analyzed to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans; the DIC algorithm needs fewer database scans but uses a complex lattice data structure. The main focus of this paper is to propose a new optimized algorithm (the Friendly algorithm) and compare its performance with the existing algorithms. A dataset is used to find frequent itemsets and association rules with the help of the existing algorithms and the proposed Friendly algorithm, and it has been observed that the proposed algorithm also finds all the frequent itemsets and essential association rules from the database, but with fewer database scans than the existing algorithms. The proposed algorithm uses an optimized data structure, namely a graph with an adjacency matrix.
Keywords: association rules, data mining, dynamic itemset counting, FP-growth, Friendly algorithm, graph
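A minimal sketch of the Apriori baseline that the proposed Friendly algorithm is compared against, assuming the mlxtend library; this is not the authors' implementation, only the standard frequent-itemset/association-rule pipeline on a toy transaction set.

```python
# Baseline: frequent itemsets and association rules via Apriori (mlxtend).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [["milk", "bread"], ["milk", "diaper", "beer"],
                ["bread", "diaper", "beer"], ["milk", "bread", "diaper"]]

te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions),
                  columns=te.columns_)       # one-hot transaction matrix

frequent = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(frequent)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```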
Procedia PDF Downloads 422
5810 Standard Languages for Creating a Database to Display Financial Statements on a Web Application
Authors: Vladimir Simovic, Matija Varga, Predrag Oreski
Abstract:
XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements in web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries recognize the role of the XBRL language in financial reporting and the benefits the reporting format provides in the collection, analysis, preparation, publication, and exchange of data (information), which is the positive side of this language. Here we present the advantages and opportunities a company may gain by using the XBRL format for business reporting. This paper also presents XBRL and the other languages used for creating the database, such as XML, XHTML, etc. The role of the AJAX model and technology during the exchange of financial data between the web client and the web server is explained in detail, and the basic layers of the network for data exchange via the web are described.
Keywords: XHTML, XBRL, XML, JavaScript, AJAX technology, data exchange
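A minimal sketch of producing a tiny XBRL-style instance document for one balance sheet item with Python's standard library. The namespace usage, element names, and context details are simplified placeholders, not a complete taxonomy-valid XBRL filing; a web application would then fetch such a file via AJAX and render it as XHTML.

```python
# Generate a simplified XBRL-style instance document for one reported fact.
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"
ET.register_namespace("xbrli", XBRLI)

root = ET.Element(f"{{{XBRLI}}}xbrl")

# reporting context: the balance sheet date
context = ET.SubElement(root, f"{{{XBRLI}}}context", id="FY2015")
period = ET.SubElement(context, f"{{{XBRLI}}}period")
ET.SubElement(period, f"{{{XBRLI}}}instant").text = "2015-12-31"

# one reported fact, tied to the context (unit handling omitted for brevity)
assets = ET.SubElement(root, "Assets",
                       contextRef="FY2015", unitRef="EUR", decimals="0")
assets.text = "1250000"

ET.ElementTree(root).write("statement.xbrl", xml_declaration=True,
                           encoding="utf-8")
```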
Procedia PDF Downloads 394