Search results for: database forensics
1621 Client Hacked Server
Authors: Bagul Abhijeet
Abstract:
Background: The client-server model is the backbone of today's internet communication, in which an ordinary user does not have control over a particular website or server. Using the same processing model, however, one can gain unauthorized access to a particular server. In this paper, we discuss an application scenario for hacking a simple website or server, consisting of an unauthorized way to access the server database. The application autonomously takes direct access to a simple website or server and retrieves the essential information maintained by the administrator. In this system, the IP address of the server is given as input to retrieve the user-id and password of the server. This breaks the administrative security of the server and acquires control of the server database, while a virus helps evade server security by crashing the whole server. Objective: To control malicious attacks, protect government websites, and uncover illegal hacker activity. Results: After implementing different hacking as well as non-hacking techniques, the system hacked simple websites with normal security credentials. It provides access to the server database and allows an attacker to perform database operations from a client machine. The figure above shows the experimental results of this application on different servers, with satisfactory results as required. Conclusion: In this paper, we have presented a way to hack a server that includes both hacking and non-hacking methods. These algorithms and methods provide an efficient way to access a server database, and breaking network security in this way allows new and better security frameworks to be introduced. The term "hacking" should be considered not only for its illegal activities but also as a means of strengthening our global network. Keywords: hacking, vulnerabilities, dummy request, virus, server monitoring
Procedia PDF Downloads 254
1620 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers, with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the prefetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes the novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems help improve hybrid data management within the broader context of in-memory systems. Keywords: in-memory database, disk-based system, hybrid database, concurrency control
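The prefetching idea behind EMA can be illustrated with a small sketch. This is not the authors' EMA implementation; it is a toy model (the class and page names are invented here) showing how fetching a transaction's read set into memory up front turns disk reads into memory-speed hits:

```python
class PrefetchingStore:
    """Toy disk-backed store: reads pay a 'disk' cost unless the page was prefetched."""
    def __init__(self, disk):
        self.disk = disk          # page_id -> value, simulating disk-resident data
        self.cache = {}           # in-memory buffer of prefetched pages
        self.disk_reads = 0

    def prefetch(self, page_ids):
        # EMA-style idea (simplified): fetch a transaction's read set up front
        for pid in page_ids:
            self.cache[pid] = self.disk[pid]

    def read(self, page_id):
        if page_id in self.cache:
            return self.cache[page_id]   # memory-speed access
        self.disk_reads += 1             # cache miss: pay the disk cost
        return self.disk[page_id]

disk = {i: f"row-{i}" for i in range(100)}
store = PrefetchingStore(disk)
txn_read_set = [3, 7, 42]
store.prefetch(txn_read_set)     # declared before the transaction runs
values = [store.read(p) for p in txn_read_set]
print(store.disk_reads)          # 0: all reads served from memory
```

In the real system it is the very high concurrency of the prefetch phase that makes this pay off; the sketch only shows the hit/miss accounting.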
Procedia PDF Downloads 421
1619 Perusing the Influence of a Visual Editor in Enabling PostgreSQL Query Learn-Ability
Authors: Manuela Nayantara Jeyaraj
Abstract:
PostgreSQL is an object-relational database management system (ORDBMS) with an architecture that ensures optimal-quality data management. Due to the overshadowing growth of similar ORDBMSs, however, PostgreSQL has not become renowned among the database user community. Despite having its features and built-in functionalities overshadowed, PostgreSQL renders a vast range of utilities for data manipulation and hence deserves to be upheld more among users. Introducing PostgreSQL in a way that stimulates interest in its advantageous features requires endorsing learn-ability as an add-on, as the target groups considered consist of both amateur and professional PostgreSQL users. The scope of this paper is to provide easy contemplation of query formulations and flows through a visual editor, designed according to user-interface principles, that supports every aspect of making PostgreSQL learnable through self-operation and creation of queries within the visual editor. This paper scrutinizes the importance of choosing PostgreSQL as the working database environment, the visual perspectives that influence human behaviour and ultimately learning, the modes in which learn-ability can be provided via visualization, and the advantages reaped by the implementation of the proposed system features. Keywords: database, learn-ability, PostgreSQL, query, visual-editor
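As an illustration of what such a visual editor might emit, the sketch below assembles a parameterized PostgreSQL SELECT from choices a user could make graphically. The function name, table, and columns are hypothetical, not from the paper; the `%s` placeholders follow the style used by PostgreSQL drivers such as psycopg2:

```python
def build_select(table, columns, where=None, order_by=None):
    """Assemble a PostgreSQL SELECT from choices a visual editor might capture."""
    sql = f'SELECT {", ".join(columns)} FROM {table}'
    if where:  # where: list of (column, operator, value) triples
        conds = " AND ".join(f"{col} {op} %s" for col, op, _ in where)
        sql += f" WHERE {conds}"
    if order_by:
        sql += f" ORDER BY {order_by}"
    params = [val for _, _, val in (where or [])]
    return sql, params

query, params = build_select(
    "students", ["name", "grade"],
    where=[("grade", ">=", 70)], order_by="grade DESC")
print(query)   # SELECT name, grade FROM students WHERE grade >= %s ORDER BY grade DESC
print(params)  # [70]
```

Keeping values out of the SQL string and in a parameter list is what lets the generated query run safely through a driver's parameterized-execution API.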
Procedia PDF Downloads 174
1618 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan
Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid
Abstract:
In geophysical exploration surveys, the quality of acquired data is of significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan was taken as a test case in order to assess its quality on a statistical basis using the normalized root mean square error (NRMSE), Cronbach's alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be flat, tectonically quiescent, and rich in oil and gas reserves; however, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also proved the bias and erratic nature of the acquired database, and the low estimated value of alpha (α) in Cronbach's alpha test confirmed its poor reliability. Such a low-quality database requires excessive static correction or, in some cases, reacquisition of data, which is usually not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models to make much more informed decisions in hydrocarbon exploration. Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey
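The two reliability measures named above can be computed directly. The sketch below is a plain-Python illustration of NRMSE (normalized by the observed range, one common convention) and Cronbach's alpha; the sample numbers are invented:

```python
def nrmse(observed, predicted):
    """Root mean square error normalized by the observed range."""
    n = len(observed)
    rmse = (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5
    return rmse / (max(observed) - min(observed))

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    k = len(items)
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

print(round(nrmse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]), 3))  # 0.053
print(cronbach_alpha([[2, 4, 6], [1, 3, 5]]))                       # 1.0 (perfectly consistent items)
```

A low alpha, as reported in the study, means the item scores do not vary together, i.e. the measurements are internally inconsistent.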
Procedia PDF Downloads 166
1617 Medical and Surgical Nursing Care
Authors: Nassim Salmi
Abstract:
Postoperative mobilization is an important part of fundamental care. Increased mobilization has a positive effect on recovery, but immobilization remains a challenge in postoperative care. Aims: To report how the establishment of a national nursing database was used to measure postoperative mobilization in patients undergoing surgery for ovarian cancer. Mobilization was defined as at least 3 hours out of bed on postoperative day 1, with the goal of achieving this in 60% of patients. Clinical nurses entered data on 4,400 patients with ovarian cancer. Findings: 46.7% of patients met the goal for mobilization on the first postoperative day, but variations in the duration and type of mobilization were observed; of those mobilized, 51.8% had been walking in the hallway. A national nursing database creates opportunities to optimize fundamental care. By comparing nursing data with oncological, surgical, and pathology data, it became possible to study mobilization in relation to cancer stage, comorbidity, treatment, and extent of surgery. Keywords: postoperative care, gynecology, nursing documentation, database
Procedia PDF Downloads 118
1616 Using Hyperspectral Camera and Deep Learning to Identify the Ripeness of Sugar Apples
Authors: Kuo-Dung Chiou, Yen-Xue Chen, Chia-Ying Chang
Abstract:
This study uses AI technology to establish an expert system and a fruit-appearance database for pineapples and custard apples. Images are collected according to appearance defects and fruit maturity, and deep learning is used to locate the fruit and detect appearance flaws and maturity in real time. In addition, a hyperspectral camera was used to scan pineapples and custard apples, and the light reflection in different frequency bands was used to find the key band for pectin softening in post-ripe fruits. A large number of multispectral images were collected and analyzed to establish a database of pineapple custard apple and big-eyed custard apple, which includes a high-definition color image database, a hyperspectral database in the 377-1020 nm band, and a five-band (450, 500, 670, 720, 800 nm) multispectral database. It contains 4,896 images with manually labeled ground truth, 26 hyperspectral pineapple custard apple fruits (520 images each), and 168 multispectral custard apple fruits (5 images each). Using the color image database to train the pre-training network architecture of YOLOv4, with training weights built from the fruit database, real-time detection is achieved with a recognition rate above 97.96%. We also took a large number of continuous multispectral shots and calculated the difference and average ratio of the fruit in the 670 and 720 nm bands; both follow the same trend, increasing until maturity and decreasing afterwards. Subsequently, further sub-bands will be added to analyze sugar content and moisture numerically and to derive an absolute maturity value and a maturity data curve. Keywords: hyperspectral image, fruit firmness, deep learning, automatic detection, automatic measurement, intelligent labor saving
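The band arithmetic described above can be sketched as follows. The index used here (difference of the two band means over their average) and the sample reflectance values are illustrative assumptions, not the paper's exact formula:

```python
def ripeness_index(band_670, band_720):
    """Difference of mean reflectance in two bands, normalized by their average.
    A hypothetical stand-in for the 670/720 nm difference and ratio tracked in the study."""
    m670 = sum(band_670) / len(band_670)
    m720 = sum(band_720) / len(band_720)
    return (m720 - m670) / ((m720 + m670) / 2)

# Toy per-pixel reflectance samples for one fruit region at two ripeness stages
unripe = ripeness_index([0.30, 0.32, 0.31], [0.45, 0.47, 0.46])
ripe   = ripeness_index([0.20, 0.21, 0.19], [0.55, 0.56, 0.54])
print(unripe < ripe)   # True: the index rises as the fruit matures
```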
Procedia PDF Downloads 5
1615 Using Deep Learning in Lyme Disease Diagnosis
Authors: Teja Koduru
Abstract:
Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease, is therefore crucial to early diagnosis and treatment. In this study, we aim to utilize deep learning frameworks, including TensorFlow and Keras, to create deep convolutional neural networks (DCNN) that detect acute Lyme disease from images of erythema migrans. This study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images of EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and regular skin, for a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non-EM. Finally, this model was converted into a web and mobile application to allow for rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and lead to a lower mortality rate from Lyme disease. Keywords: Lyme, untreated Lyme, erythema migrans rash, EM rash
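The machine-based duplicate removal step can be done by hashing image bytes, as in this minimal sketch (this handles byte-identical duplicates only; near-duplicates would need perceptual hashing):

```python
import hashlib

def remove_duplicates(images):
    """Drop byte-identical images by hashing their contents.
    One simple way to do machine-based duplicate removal; not the study's exact tool."""
    seen, unique = set(), []
    for img_bytes in images:
        digest = hashlib.sha256(img_bytes).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(img_bytes)
    return unique

images = [b"rash-photo-1", b"rash-photo-2", b"rash-photo-1"]
print(len(remove_duplicates(images)))   # 2
```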
Procedia PDF Downloads 242
1614 The Role of Digital Technology in Crime Prevention: A Case Study of Cellular Forensics Unit, Capital City Police Peshawar
Authors: Muhammad Ashfaq
Abstract:
Main theme: The prime focus of this study is the role of digital technology in crime prevention, with special focus on the Cellular Forensic Unit of Capital City Police Peshawar, Khyber Pakhtunkhwa, Pakistan. Objective(s) of the study: The prime objective is to provide the statistics, strategies, and patterns of analysis used for crime prevention in the Cellular Forensic Unit of Capital City Police Peshawar. Research method and procedure: A qualitative research method was used, obtaining secondary data from the research wing and the Information Technology (IT) section of Peshawar police. Content analysis was the method used to conduct the study, which is delimited to Capital City Police and the Cellular Forensic Unit of Peshawar, KP, Pakistan. Major finding(s): It is evident that the old traditional approach will never provide solutions for better crime control. The best way to control crime and promote proactive policing is to adopt new technologies. The study reveals that technology has made the police more effective and vigilant compared to traditional policing. Heinous crimes such as abduction, missing persons, snatching, burglaries, and blind murder cases are now traceable with the help of technology. Recommendation(s): The analysis of the data indicates that IT experts should be recruited along with research analysts to assist and facilitate the operational and investigation units of the police in a timely manner. A mobile locator should be provided to the Cellular Forensic Unit to apprehend criminals promptly, and the unit should be equipped with the latest digital analysis software. Keywords: criminology-pakistan, crime prevention-KP, digital forensics, digital technology-pakistan
Procedia PDF Downloads 98
1613 Creating Database and Building 3D Geological Models: A Case Study on Bac Ai Pumped Storage Hydropower Project
Authors: Nguyen Chi Quang, Nguyen Duong Tri Nguyen
Abstract:
This article is a first step towards researching and outlining the structure of the geotechnical database for the geological survey of a power project; in this report, the database was created for the Bac Ai pumped storage hydropower project. To provide a method of organizing and storing geological and topographic survey data and experimental results in a spatial database, the RockWorks software is used to bring optimal efficiency to the process of exploiting, using, and analyzing data in service of the design work in power engineering consulting. Three-dimensional (3D) geotechnical models are created from the survey data, covering stratigraphy, lithology, porosity, and other properties. For the Bac Ai pumped storage hydropower project, the 3D geotechnical model comprises six closely stacked stratigraphic formations built by the Horizons method, whereas the engineering geological parameters are modeled by geostatistical methods. Accuracy and reliability are assessed through error statistics, empirical evaluation, and expert methods. Analysis of the three-dimensional model allows better visualization of volumetric calculations, excavation and backfilling of the lake area, tunneling of power pipelines, and calculation of on-site construction material reserves. In general, the application of engineering geological modeling makes the design work more intuitive and comprehensive, helping construction designers better identify and offer the most optimal design solutions for the project. The database remains updated and synchronized, and enables 3D modeling of geological and topographic data to be integrated with the design data according to building information modeling; this is also the base platform for BIM and GIS integration. Keywords: database, engineering geology, 3D model, RockWorks, Bac Ai pumped storage hydropower project
Procedia PDF Downloads 171
1612 Development of an Asset Database to Enhance the Circular Business Models for the European Solar Industry: A Design Science Research Approach
Authors: Ässia Boukhatmi, Roger Nyffenegger
Abstract:
The expansion of solar energy as a means to address the climate crisis is undisputed, but the increasing number of new photovoltaic (PV) modules being put on the market is simultaneously leading to increased challenges in terms of managing the growing waste stream. Many of the discarded modules are still fully functional but are often damaged by improper handling after disassembly or not properly tested to be considered for a second life. In addition, the collection rate for dismantled PV modules in several European countries is only a fraction of previous projections, partly due to the increased number of illegal exports. The underlying problem for those market imperfections is an insufficient data exchange between the different actors along the PV value chain, as well as the limited traceability of PV panels during their lifetime. As part of the Horizon 2020 project CIRCUSOL, an asset database prototype was developed to tackle the described problems. In an iterative process applying the design science research methodology, different business models, as well as the technical implementation of the database, were established and evaluated. To explore the requirements of different stakeholders for the development of the database, surveys and in-depth interviews were conducted with various representatives of the solar industry. The proposed database prototype maps the entire value chain of PV modules, beginning with the digital product passport, which provides information about materials and components contained in every module. Product-related information can then be expanded with performance data of existing installations. This information forms the basis for the application of data analysis methods to forecast the appropriate end-of-life strategy, as well as the circular economy potential of PV modules, already before they arrive at the recycling facility. The database prototype could already be enriched with data from different data sources along the value chain. 
From a business model perspective, the database offers opportunities both in the area of reuse and with regard to the certification of sustainable modules; participating actors thereby have the opportunity to differentiate their business and exploit new revenue streams. Future research can apply this approach to further industries and product sectors and validate the database prototype in a practical context; the approach can also serve as a basis for standardization efforts to strengthen the circular economy. Keywords: business model, circular economy, database, design science research, solar industry
Procedia PDF Downloads 130
1611 Investigating Real Ship Accidents with Descriptive Analysis in Turkey
Authors: İsmail Karaca, Ömer Söner
Abstract:
The use of advanced methods has been increasing day by day in the maritime sector, one of the sectors least affected by the COVID-19 pandemic. The aim is to minimize accidents, especially by using advanced methods in the investigation of marine accidents. This research conducted an exploratory statistical analysis of particular ship accidents in the database of the Transport Safety Investigation Center of Turkey. Forty-six ship accidents that occurred between 2010 and 2018 were selected from the database. In addition to the availability of a reliable and comprehensive database, taking advantage of robust statistical models for investigation is critical to improving the safety of ships. Thus, descriptive analysis was used to identify the causes and conditional factors related to different types of ship accidents. The research outcomes underline the fact that environmental factors and the day/night ratio have a great influence on ship safety. Keywords: descriptive analysis, maritime industry, maritime safety, ship accident statistics
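A descriptive analysis of this kind reduces to frequency counts and ratios over the accident records. A minimal sketch with invented fields and values:

```python
from collections import Counter

# Toy accident records in the spirit of the study's database fields
accidents = [
    {"type": "collision", "time_of_day": "night"},
    {"type": "grounding", "time_of_day": "day"},
    {"type": "collision", "time_of_day": "night"},
    {"type": "fire",      "time_of_day": "day"},
]

by_type = Counter(a["type"] for a in accidents)                 # frequency of each accident type
night_ratio = sum(a["time_of_day"] == "night"
                  for a in accidents) / len(accidents)          # share occurring at night
print(by_type.most_common(1))   # [('collision', 2)]
print(night_ratio)              # 0.5
```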
Procedia PDF Downloads 140
1610 Comparison between RILM, JSTOR, and WorldCat Used to Search for Secondary Literature
Authors: Stacy Jarvis
Abstract:
Databases such as JSTOR, RILM, and WorldCat have been the main sources and stores of literature in the music world. RILM is a bibliographic database of over 2.6 million citations to writings about music from over 70 countries, produced by the Research Institute for the Study of Music at the University of Buffalo. JSTOR is an e-library of academic journals, books, and primary sources; it helps scholars find, use, and build upon a vast range of literature through a powerful teaching and research platform. WorldCat is the world's biggest library catalogue, helping scholars find library materials online. These databases are evaluated in the music sphere by examining their descriptions and intended uses and by finding similarities and differences among them. Through comparison, it is found that they serve different purposes, though they share the goal of providing and storing literature. Since each database majors in different parts of the literature, the intended use of the three databases is also evaluated; this can be found in the section on description, scope, and intended uses. These areas are crucial to the research, as they address the functional and literature differences among the three databases. The databases are also found to have different quantitative potential, determined by the year each began collecting literature and by the number of articles, periodicals, albums, conference proceedings, music scores, dissertations, digital media, essay collections, journal articles, monographs, online resources, reviews, and reference materials found in each; this too is covered in the description, scope, and intended uses section and in the section on the importance of the databases in identifying literature on different topics.
To compare the delivery of services to users, the importance of databases in identifying literature on different topics is addressed in its own section. Even though these databases are all used in research, each has advantages and disadvantages, addressed in the corresponding sections. This is significant in determining which of the three is best, and in addressing how the shortcomings of one database can be overcome by using two databases together while conducting research, as discussed in the section on combining RILM and JSTOR. All this information revolves around the idea that a huge amount of quantitative and qualitative data on music and digital content can be found in the presented databases; however, each database has a different construction and material features, contributing to musical scholarship in its own way. Keywords: RILM, JSTOR, WorldCat, database, literature, research
Procedia PDF Downloads 84
1609 An Optimized Association Rule Mining Algorithm
Authors: Archana Singh, Jyoti Agarwal, Ajay Rana
Abstract:
Data mining is an efficient technology for discovering patterns in large databases. Association rule mining techniques are used to find the correlations between the various itemsets in a database, and these correlations are used in decision making and pattern analysis. In recent years, the problem of finding association rules in large datasets has been addressed by many researchers. Various research papers on association rule mining (ARM) were first studied and analyzed to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans. The DIC algorithm needs fewer database scans but uses a complex lattice data structure. The main focus of this paper is to propose a new optimized algorithm (the Friendly algorithm) and compare its performance with the existing algorithms. A dataset is used to find frequent itemsets and association rules with the help of the existing and proposed algorithms, and it has been observed that the proposed algorithm also finds all the frequent itemsets and essential association rules, with fewer database scans than the existing algorithms. The proposed algorithm uses an optimized data structure: a graph with an adjacency matrix. Keywords: association rules, data mining, dynamic itemset counting, FP-growth, friendly algorithm, graph
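The graph/adjacency-matrix idea can be illustrated for 2-itemsets: a single scan fills a co-occurrence matrix, from which frequent pairs are read off without rescanning the database. This is an illustrative sketch, not the paper's exact Friendly algorithm:

```python
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """One scan of the database fills a pair-count 'adjacency matrix';
    frequent 2-itemsets are then read off without further scans."""
    items = sorted({i for t in transactions for i in t})
    idx = {item: k for k, item in enumerate(items)}
    n = len(items)
    matrix = [[0] * n for _ in range(n)]       # adjacency matrix of co-occurrence counts
    for t in transactions:                     # the single database scan
        for a, b in combinations(sorted(set(t)), 2):
            matrix[idx[a]][idx[b]] += 1
    return {(items[i], items[j]): matrix[i][j]
            for i in range(n) for j in range(i + 1, n)
            if matrix[i][j] >= min_support}

db = [["bread", "milk"], ["bread", "milk", "eggs"], ["milk", "eggs"]]
print(frequent_pairs(db, 2))   # {('bread', 'milk'): 2, ('eggs', 'milk'): 2}
```

A full ARM algorithm would extend these pairs to larger itemsets and derive rules; the sketch only shows how the matrix avoids repeated scans.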
Procedia PDF Downloads 422
1608 Standard Languages for Creating a Database to Display Financial Statements on a Web Application
Authors: Vladimir Simovic, Matija Varga, Predrag Oreski
Abstract:
XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements in web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries recognize the role of the XBRL language in financial reporting and the benefits the reporting format provides in the collection, analysis, preparation, publication, and exchange of data (information), which is the positive side of this language. Here we present the advantages and opportunities a company may gain by using the XBRL format for business reporting. This paper also presents XBRL alongside the other languages used for creating the database, such as XML and XHTML. The role of the AJAX model and technology during the exchange of financial data between the web client and web server is explained in detail, and the basic network layers for data exchange via the web are described. Keywords: XHTML, XBRL, XML, JavaScript, AJAX technology, data exchange
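To give a flavor of the XML-based formats involved, the sketch below emits a minimal XBRL-like fragment with Python's standard library. The element names and attributes are illustrative only and do not follow a real XBRL taxonomy:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal fact list in an XBRL-like shape
root = ET.Element("xbrl")
for name, value in [("Assets", "120000"), ("Liabilities", "45000")]:
    # contextRef/unitRef mimic how XBRL facts point at a period and a unit
    fact = ET.SubElement(root, name, contextRef="FY2023", unitRef="EUR")
    fact.text = value

report = ET.tostring(root, encoding="unicode")
print(report)
```

A web application would serve such a document to the client, where JavaScript/AJAX code parses it and renders the statement without a full page reload.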
Procedia PDF Downloads 395
1607 Morphological Features Fusion for Identifying INBREAST-Database Masses Using Neural Networks and Support Vector Machines
Authors: Nadia el Atlas, Mohammed el Aroussi, Mohammed Wahbi
Abstract:
In this paper, a novel technique for mass characterization based on robust feature fusion is presented. The proposed method consists of three main stages: (a) segmenting the masses using edge information; (b) calculating and fusing the most relevant morphological features; and (c) classification, which allows us to classify the images into benign and malignant masses. In this last step we implemented support vector machines (SVM) and artificial neural networks (ANN), which were evaluated with the following performance criteria: confusion matrix, accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curve, and error histogram. The effectiveness of this new approach was evaluated on a recently developed database, the INBREAST database. The fusion of the most appropriate morphological features provided very good results: the SVM achieved an accuracy of 64.3%, whereas the ANN classifier gave better results with an accuracy of 97.5%. Keywords: breast cancer, mammography, CAD system, features, fusion
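The performance criteria listed above follow directly from the binary confusion matrix. A plain-Python sketch with invented labels (1 = malignant, 0 = benign):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from a binary confusion matrix."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }

y_true = [1, 1, 0, 0, 1, 0]   # invented ground-truth labels
y_pred = [1, 0, 0, 0, 1, 1]   # invented classifier outputs
print(classification_metrics(y_true, y_pred))
```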
Procedia PDF Downloads 601
1606 Using Priority Order of Basic Features for Circumscribed Masses Detection in Mammograms
Authors: Minh Dong Le, Viet Dung Nguyen, Do Huu Viet, Nguyen Huu Tu
Abstract:
In this paper, we present a new method for detecting circumscribed masses in mammograms. Our method is evaluated on 23 mammographic images of circumscribed masses and 20 normal mammograms from the public Mini-MIAS database. The method is quite promising, with a sensitivity (SE) of 95% at only about one false positive per image (FPpI). To achieve these results we proceed as follows: first, the input images are preprocessed to enhance the key information of circumscribed masses; next, basic features of abnormal regions are calculated and evaluated statistically on the training database; then, mammograms in the testing database are divided into equal blocks for which the corresponding features are calculated; finally, the priority order of the basic features is used to classify blocks as abnormal or normal regions. Keywords: mammograms, circumscribed masses, statistical evaluation, priority order of basic features
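The block-division step can be sketched as follows; mean intensity stands in here for whichever basic features the method actually ranks, and the pixel values are invented:

```python
def split_into_blocks(image, block_h, block_w):
    """Divide a 2D pixel grid into equal blocks (dimensions assumed to divide
    evenly) and return one simple per-block feature: the mean intensity."""
    h, w = len(image), len(image[0])
    feats = {}
    for r in range(0, h, block_h):
        for c in range(0, w, block_w):
            block = [image[i][j]
                     for i in range(r, r + block_h)
                     for j in range(c, c + block_w)]
            feats[(r // block_h, c // block_w)] = sum(block) / len(block)
    return feats

image = [[10, 10, 200, 200],
         [10, 10, 200, 200]]   # bright right half mimics a dense mass region
print(split_into_blocks(image, 2, 2))   # {(0, 0): 10.0, (0, 1): 200.0}
```

A classifier would then flag blocks whose features exceed thresholds learned from the training database.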
Procedia PDF Downloads 335
1605 3D-Vehicle Associated Research Fields for Smart City via Semantic Search Approach
Authors: Haluk Eren, Mucahit Karaduman
Abstract:
This paper presents 15-year trends in scientific studies in a scholarly database for the keywords "3D" and "vehicle". The two words were selected to find their associated publications in the IEEE scholarly database. Both keywords were entered individually for the years 2002, 2012, and 2016 to identify the subjects preferred by researchers in those years. After searching and listing, we classified closely related research fields. The three years were investigated to figure out progress over specified intervals: the first, 2002-2012, is taken as the period of initial progress, and the second, 2012-2016, as the period of fast development. We found very interesting and beneficial results for understanding scholars' research field preferences over a decade. This information will be highly desirable for smart city-based research involving 3D and vehicle-related issues. Keywords: vehicle, three-dimensional, smart city, scholarly search, semantic
Procedia PDF Downloads 330
1604 Utilising an Online Data Collection Platform for the Development of a Community Engagement Database: A Case Study on Building Inter-Institutional Partnerships at UWC
Authors: P. Daniels, T. Adonis, P. September-Brown, R. Comalie
Abstract:
The community engagement unit at the University of the Western Cape was tasked with establishing a community engagement database. The database would store information of all community engagement projects related to the university. The wealth of knowledge obtained from the various disciplines would be used to facilitate interdisciplinary collaboration within the university, as well as facilitating community university partnership opportunities. The purpose of this qualitative study was to explore electronic data collection through the development of a database. Two types of electronic data collection platforms were used, namely online questionnaire and email. The semi structured questionnaire was used to collect data related to community engagement projects from different faculties and departments at the university. There are many benefits for using an electronic data collection platform, such as reduction of costs and time, ease in reaching large numbers of potential respondents, and the possibility of providing anonymity to participants. Despite all the advantages of using the electronic platform, there were as many challenges, as depicted in our findings. The findings suggest that certain barriers existed by using an electronic platform for data collection, even though it was in an academic environment, where knowledge and resources were in abundance. One of the challenges experienced in this process was the lack of dissemination of information via email to staff within faculties. The actual online software used for the questionnaire had its own limitations, such as only being able to access the questionnaire from the same electronic device. 
In a few cases, academics only completed the questionnaire after a telephone prompt or a face-to-face meeting, which raises the question: is higher education in South Africa ready to embrace electronic platforms for data collection?
Keywords: community engagement, database, data collection, electronic platform, electronic tools, knowledge sharing, university
Procedia PDF Downloads 265
1603 Optimizing Availability of Marine Knowledge Repository with Cloud-Based Framework
Authors: Ahmad S. Mohd Noor, Emma A. Sirajudin, Nur F. Mat Zain
Abstract:
Reliability is an important property of a knowledge repository system. The National Marine Bioinformatics System, or NABTICS, is a marine knowledge repository portal that aims to provide a baseline for marine biodiversity and a tool for researchers and developers. It is intended to be a large and growing online database as well as a metadata system for inputs of research analysis. The trend in today's large distributed systems, such as Cloud computing, is the delivery of computing as a service rather than a product. The goal of this research is to give NABTICS greater availability by integrating it with Cloud-based Neighbor Replication and Failure Recovery (NRFR), achieved by deploying NABTICS in a distributed environment. As a result, users experience minimal downtime should a server fail, and the online database application can be considered highly available.
Keywords: cloud, availability, distributed system, marine repository, database replication
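The availability claim can be made concrete with a back-of-the-envelope calculation. The sketch below is not the NRFR protocol itself (which is not detailed in the abstract); it only illustrates, under the usual independence assumption, why replicating a database across neighbors raises availability: a request fails only if every replica is down at once.

```python
# Illustrative availability arithmetic for a replicated repository.
# Assumes replica failures are independent; MTBF/MTTR figures are invented.

def single_node_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of one server: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def replicated_availability(node_availability: float, replicas: int) -> float:
    """The system is down only when every replica is down simultaneously."""
    return 1.0 - (1.0 - node_availability) ** replicas

a = single_node_availability(mtbf_hours=990.0, mttr_hours=10.0)  # 0.99
print(round(a, 2))
print(round(replicated_availability(a, 3), 6))
```

With three replicas of a 99%-available server, expected downtime drops from roughly 88 hours a year to well under a minute, which is the sense in which such a system is "highly available".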
Procedia PDF Downloads 472
1602 Designing a Model for Preparing Reports on the Automatic Earned Value Management Progress by the Integration of Primavera P6, SQL Database, and Power BI: A Case Study of a Six-Storey Concrete Building in Mashhad, Iran
Authors: Hamed Zolfaghari, Mojtaba Kord
Abstract:
Project planners and controllers frequently face the challenge of inadequate software for preparing automatic project progress reports based on actual project information updates. They usually build dashboards in Microsoft Excel, which are local and not available online, and which are not linked to planning software such as Microsoft Project; Excel also lacks the database required for data storage. This study proposes a model for preparing automatic online project progress reports, based on actual project information updates, by integrating Primavera P6, an SQL database, and Power BI for a construction project. The designed model enables project planners and controllers to prepare project reports automatically and immediately after updating the project schedule with actual information. To develop the model, the data were entered into P6 and the information was stored in the SQL database. The proposed model can prepare a wide range of reports, such as earned value management, HR, financial, physical, and risk reports, automatically in the Power BI application. Furthermore, the reports can be published and shared online.
Keywords: Primavera P6, SQL, Power BI, EVM, integration management
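Since the model's flagship report is earned value management, the core arithmetic the Power BI dashboard would display can be sketched as follows. This is the generic EVM calculation, not the authors' actual SQL schema or Power BI measures; the field names and figures are illustrative.

```python
# Standard earned value indicators from the three base quantities:
# PV (planned value), EV (earned value), AC (actual cost).
# In the proposed pipeline these would be queried from the SQL database
# that Primavera P6 populates; here they are passed in directly.

def earned_value_metrics(pv: float, ev: float, ac: float) -> dict:
    return {
        "CV":  ev - ac,   # cost variance (negative = over budget)
        "SV":  ev - pv,   # schedule variance (negative = behind schedule)
        "CPI": ev / ac,   # cost performance index (<1 = over budget)
        "SPI": ev / pv,   # schedule performance index (<1 = behind)
    }

m = earned_value_metrics(pv=100_000.0, ev=90_000.0, ac=120_000.0)
print(m["CPI"], m["SPI"])  # 0.75 0.9
```

A CPI of 0.75 here means every unit of money spent has earned only 0.75 units of budgeted work, exactly the kind of indicator the automatic report surfaces after each schedule update.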
Procedia PDF Downloads 110
1601 New Approach for Constructing a Secure Biometric Database
Authors: A. Kebbeb, M. Mostefai, F. Benmerzoug, Y. Chahir
Abstract:
Multimodal biometric identification is the combination of several biometric systems. The challenge of this combination is to reduce some of the limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and protection of a multimodal biometric database dedicated to an identification system. We use topological watermarking to hide the relation between the face image and the registered descriptors extracted from the other modalities of the same person, enabling more secure user identification.
Keywords: biometric databases, multimodal biometrics, security authentication, digital watermarking
Procedia PDF Downloads 392
1600 Predictive Analysis of Personnel Relationship in Graph Database
Authors: Kay Thi Yar, Khin Mar Lar Tun
Abstract:
Nowadays, social networks are popular and widely used all over the world. Searching for personal information about individuals, and for the connections between them (people's relationships in the real world), has become an interesting issue in our society. In this paper, we propose a framework with three portions for exploring people's relations from their connected information. The first portion focuses on the graph database structure used to store people's connected data. The second proposes a graph database searching algorithm, the Modified-SoS-ACO (Sense of Smell-Ant Colony Optimization). The last portion proposes a Deductive Reasoning Algorithm to determine the relationship between two persons. This study presents a suitable storage structure for connected information, a graph searching algorithm, and a deductive reasoning algorithm to predict and analyze personnel relationships from people's connected information.
Keywords: personnel information, graph storage structure, graph searching algorithm, deductive reasoning algorithm
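The Modified-SoS-ACO algorithm itself is not reproduced in the abstract, but the task it addresses, finding the chain of connections between two persons in the graph store, can be illustrated with a plain breadth-first search over an adjacency-list graph. The names and edges below are invented:

```python
# Minimal sketch of relationship search in a people graph: BFS returns
# the shortest chain of acquaintances linking two persons, or None.
from collections import deque

def connection_path(graph: dict, start: str, goal: str):
    """graph maps each person to a list of directly connected persons."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no connection exists

people = {
    "Aye":  ["Mya", "Ko"],
    "Mya":  ["Aye", "Thin"],
    "Ko":   ["Aye"],
    "Thin": ["Mya"],
}
print(connection_path(people, "Aye", "Thin"))  # ['Aye', 'Mya', 'Thin']
```

A deductive step, as in the paper's third portion, would then interpret the returned chain (e.g., friend-of-a-friend) rather than just report it.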
Procedia PDF Downloads 452
1599 Refitting Equations for Peak Ground Acceleration in Light of the PF-L Database
Authors: Matevž Breška, Iztok Peruš, Vlado Stankovski
Abstract:
A systematic overview of existing Ground Motion Prediction Equations (GMPEs) has been published by Douglas. The number of earthquake recordings used for fitting these equations has increased over the past decades; the current PF-L database contains 3550 recordings. Since GMPEs frequently model peak ground acceleration (PGA), the goal of the present study was to refit a selection of 44 existing equation models for PGA in light of the latest data. The Levenberg-Marquardt algorithm was used to fit the coefficients of the equations, and the results are evaluated both quantitatively, by reporting the root mean squared error (RMSE), and qualitatively, by plotting the five best-fitted equations. The RMSE was found to be as low as 0.08 for the best equation models. The newly estimated coefficients vary from the values published in the original works.
Keywords: Ground Motion Prediction Equations, Levenberg-Marquardt algorithm, refitting PF-L database, peak ground acceleration
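The quantitative criterion used to rank the refitted equations is straightforward to reproduce. The sketch below pairs a generic attenuation form with the RMSE measure; the functional form and coefficients are illustrative placeholders, not one of the 44 refitted models, and the observed/predicted values are invented.

```python
# Evaluation step of a GMPE refit: predict ln(PGA) from magnitude and
# distance with a generic form, then score the fit by RMSE.
import math

def ln_pga(magnitude: float, distance_km: float,
           c1: float, c2: float, c3: float) -> float:
    """Generic attenuation form ln(PGA) = c1 + c2*M + c3*ln(R)."""
    return c1 + c2 * magnitude + c3 * math.log(distance_km)

def rmse(observed, predicted) -> float:
    """Root mean squared error between observed and predicted values."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

obs  = [0.10, 0.20, 0.15]
pred = [0.12, 0.18, 0.15]
print(round(rmse(obs, pred), 4))
```

A Levenberg-Marquardt fitter iteratively adjusts c1, c2, c3 to minimise exactly this residual; the study's best models reach an RMSE of about 0.08.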
Procedia PDF Downloads 463
1598 Methylation Profiling and Validation of Candidate Tissue-Specific Differentially Methylated Regions for Identification of Human Blood, Saliva, Semen and Vaginal Fluid and Its Application in Forensics
Authors: Meenu Joshi, Natalie Naidoo, Farzeen Kader
Abstract:
Identification of body fluids is an essential step in forensic investigation to aid crime reconstruction. Tissue-specific differentially methylated regions (tDMRs) of the human genome can be targeted as biomarkers to differentiate between body fluids. The present study was undertaken to establish the methylation status of potential tDMRs in blood, semen, saliva, and vaginal fluid using methylation-specific PCR (MSP) and bisulfite sequencing (BS). The methylation statuses of three potential tDMRs in the genes ZNF282, PTPRS, and HPCAL1 were analysed in 10 samples of each body fluid. With MSP analysis, the ZNF282 and PTPRS tDMRs displayed semen-specific hypomethylation, while the HPCAL1 tDMR showed saliva-specific hypomethylation. With quantitative analysis by BS, the ZNF282 tDMR showed a statistically significant difference in methylation between semen and all other body fluids, both overall and at individual CpG sites (p < 0.05). To evaluate the effect of environmental conditions on the stability of methylation profiles of the ZNF282 tDMR, five samples of each body fluid were subjected to five simulated forensic conditions (dry at room temperature, wet in a desiccator, outside on the ground, sprayed with alcohol, and sprayed with bleach) for 50 days. Vaginal fluid showed the highest DNA recovery under all conditions, while semen yielded the least DNA. Under the outside-on-the-ground condition, all body fluids except semen showed a decrease in methylation level; the decrease was significant for saliva, and the difference was statistically significant for saliva and semen (p < 0.05). No differences in methylation level were observed for the ZNF282 tDMR under any condition for the vaginal fluid samples. Thus, the present study identifies the ZNF282 tDMR as a novel and stable semen-specific hypomethylation marker.
Keywords: body fluids, bisulphite sequencing, forensics, tDMRs, MSP
Procedia PDF Downloads 164
1597 Application Water Quality Modelling In Total Maximum Daily Load (TMDL) Management: A Review
Authors: S. A. Che Osmi, W. M. F. W. Ishak, S. F. Che Osmi
Abstract:
Issues of water quality and water pollution are nowadays a major problem across the country, and many management agencies attempt to develop their own TMDL databases in order to control river pollution. Over the past decade, mathematical modeling has been used as the tool for TMDL development. This paper reviews the application of water quality modeling to develop total maximum daily load (TMDL) information. To obtain a reliable TMDL database, the appropriate water quality model should be chosen based on the available data. The paper discusses the use of several water quality models, such as QUAL2E, QUAL2K, and EFDC, to develop TMDLs; attempts to integrate several models are also discussed. The review shows how differences in model properties, such as one-, two-, or three-dimensional formulations, affect their suitability for developing a TMDL database.
Keywords: TMDL, water quality modeling, QUAL2E, EFDC
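Whatever model supplies the loads, the TMDL itself reduces to a simple mass balance, TMDL = ΣWLA + ΣLA + MOS: the sum of waste load allocations (point sources) and load allocations (nonpoint sources) plus a margin of safety. A minimal sketch with invented numbers:

```python
# TMDL mass balance. In practice models such as QUAL2E or EFDC supply
# the allowable loads; the figures below are invented for illustration.

def tmdl(point_sources_kg_day, nonpoint_sources_kg_day,
         margin_of_safety_kg_day):
    """Total maximum daily load = sum(WLA) + sum(LA) + MOS."""
    wla = sum(point_sources_kg_day)    # waste load allocation (point)
    la = sum(nonpoint_sources_kg_day)  # load allocation (nonpoint)
    return wla + la + margin_of_safety_kg_day

# Two treatment plants, two diffuse (agricultural/urban runoff) loads.
print(tmdl([12.0, 8.0], [25.0, 5.0], 5.0))  # 55.0 kg/day
```

The modeling effort reviewed in the paper goes into estimating those allocations so the resulting total still meets the water quality standard.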
Procedia PDF Downloads 441
1596 Modelling of Geotechnical Data Using Geographic Information System and MATLAB for Eastern Ahmedabad City, Gujarat
Authors: Rahul Patel
Abstract:
Ahmedabad, a city located in western India, is experiencing rapid growth due to urbanization and industrialization. It is projected to become a metropolitan city in the near future, resulting in various construction activities. Since soil testing is necessary before construction can commence, construction companies and contractors must conduct soil tests periodically. The focus of this study is the process of creating a digitally formatted spatial database that integrates geotechnical data with a Geographic Information System (GIS). Building a comprehensive geotechnical (geo-)database involves three steps: collecting borehole data from reputable sources, verifying the accuracy and redundancy of the data, and standardizing and organizing the geotechnical information for integration into the database. Once the database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topo to Raster interpolation process in GIS, estimated values are assigned to all locations based on the sampled geotechnical data. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (t/m²). Various interpolation techniques were cross-validated to ensure accuracy. The resulting GIS map enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This study highlights the potential of GIS to provide an efficient solution to complex problems that would otherwise be tedious to address by other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, the system serves as a decision support tool for geotechnical engineers.
Keywords: ArcGIS, borehole data, geographic information system, geo-database, interpolation, SPT N-value, soil classification, Φ-value, bearing capacity
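The interpolation step assigns estimated values to unsampled locations; the general idea can be illustrated outside ArcGIS with a simple inverse-distance-weighted (IDW) estimate, one of the techniques commonly cross-validated in such studies (the study's own tool is Topo to Raster, which is more sophisticated). The borehole coordinates and SPT N-values below are invented:

```python
# Inverse-distance-weighted estimate of an SPT N-value at an unsampled
# point from nearby borehole records (x, y, value). Illustrative only.
import math

def idw(samples, x, y, power=2.0):
    """IDW estimate at (x, y); nearer boreholes get larger weights."""
    num, den = 0.0, 0.0
    for sx, sy, value in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return value  # query point sits exactly on a borehole
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

boreholes = [(0.0, 0.0, 10.0), (4.0, 0.0, 30.0)]
print(idw(boreholes, 2.0, 0.0))  # 20.0: midpoint gets equal weights
```

Cross-validation, as in the study, would withhold each borehole in turn, re-estimate its value from the rest, and compare against the measurement.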
Procedia PDF Downloads 75
1595 Developing a Town Based Soil Database to Assess the Sensitive Zones in Nutrient Management
Authors: Sefa Aksu, Ünal Kızıl
Abstract:
For this study, a town-based soil database was created for the Gümüşçay District of Biga, Çanakkale, Turkey. Crop and livestock production are the major activities in the district, where nutrient management is based mainly on commercial fertilizer application, ignoring livestock manure. Within the boundaries of the district, 122 soil sampling points were determined over a satellite image, and soil samples were collected from these points with the help of a handheld Global Positioning System receiver. The labeled samples were sent to a commercial laboratory to determine 11 soil parameters: salinity, pH, lime, organic matter, nitrogen, phosphorus, potassium, iron, manganese, copper, and zinc. Based on the test results, soil maps for these parameters were developed using remote sensing, GIS, and geostatistical analysis. In this study, we developed a GIS database that will be used for soil nutrient management; the methods are explained, and the soil maps and their interpretations are summarized.
Keywords: geostatistics, GIS, nutrient management, soil mapping
Procedia PDF Downloads 375
1594 Gender Perspective in Peace Operations: An Analysis of 14 UN Peace Operations
Authors: Maressa Aires de Proenca
Abstract:
The inclusion of a gender perspective in peace operations is based on a series of conventions, treaties, and resolutions designed to protect and include women and to address gender mainstreaming. The UN Security Council recognizes that women's participation and gender equality within peace operations are indispensable for achieving sustainable development and peace. However, the participation of women in the field of peace and security is still embryonic: there are gaps in female participation in conflict resolution and peace-promotion spaces, and it is not clear how women are present in these spaces. This absence may correspond to a silence about representation and about guaranteeing the female perspective within the context of peace promotion. The present research therefore aimed to describe the panorama of the participation of women currently active in the 14 active UN peace operations: 1) MINUJUSTH, Haiti; 2) MINURSO, Western Sahara; 3) MINUSCA, Central African Republic; 4) MINUSMA, Mali; 5) MONUSCO, the Democratic Republic of the Congo; 6) UNAMID, Darfur; 7) UNDOF, Golan; 8) UNFICYP, Cyprus; 9) UNIFIL, Lebanon; 10) UNISFA, Abyei; 11) UNMIK, Kosovo; 12) UNMISS, South Sudan; 13) UNMOGIP, India and Pakistan; and 14) UNTSO, Middle East. A database was constructed that recorded: (1) the position held by the woman in the peace operation, (2) her profession, (3) educational level, (4) marital status, (5) religion, (6) nationality, (7) number of years working with peace operations, and (8) whether the operation in which she serves has provided training on gender issues.
To build this database, official reports and statistics were used, accessed through the UN Peacekeeping Resource Hub, the United Nations Statistical Commission, the Peacekeeping Master Open Datasets, the Armed Conflict Database (ACD), the International Institute for Strategic Studies (IISS) database, the Armed Conflict Location & Event Data Project (ACLED) database, and the Evidence and Data for Gender Equality (EDGE) database. In addition, peacekeeping operations were contacted directly, and data were requested individually. The database showed that the presence of women in these peace operations is still incipient, but growing: there are few women in command positions, and most occupy administrative or human-care positions.
Keywords: women, peace and security, peacekeeping operations, peace studies
Procedia PDF Downloads 137
1593 Alphabet Recognition Using Pixel Probability Distribution
Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay
Abstract:
Our project topic is "Alphabet Recognition Using Pixel Probability Distribution". The project uses techniques from image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is employed in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly to the contacts. OCR is also known to be used in radar systems for reading the license plates of speeding vehicles, among many other uses. Our implementation was done using Visual Studio and OpenCV (Open Source Computer Vision), and our algorithm is based on neural networks (machine learning). The project was implemented in three modules. (1) Training: this module performs database generation, using two methods. (a) Run-time generation: the database is generated at compile time using the inbuilt fonts of the OpenCV library; no human intervention is necessary. (b) Contour detection: a JPEG template containing different fonts of a letter is converted to a weighted matrix using specialized OpenCV functions (contour detection and blob detection). The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to store (119 kB precisely). (2) Preprocessing: the input image is pre-processed using image processing techniques such as adaptive thresholding, binarization, and dilation, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image.
(3) Testing and prediction: the extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes a letter based on mathematical parameters calculated using the database and the weight matrix of the segmented image.
Keywords: contour detection, neural networks, pre-processing, recognition coefficient, runtime template generation, segmentation, weight matrix
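The "pixel probability distribution" idea behind the weighted matrix can be sketched in miniature: average several binary templates of a letter into a per-pixel ink probability, then score an unknown glyph against each letter's matrix. The 3×3 glyphs below are toy data, not the OpenCV-generated database described above, and the scoring rule is a deliberately simple stand-in for the neural network classifier:

```python
# Toy pixel-probability classifier: one weight matrix per letter,
# built by averaging binary templates; unknown glyphs are matched by
# summing weights at their ink pixels.

def weight_matrix(templates):
    """Per-pixel probability of ink across all templates of one letter."""
    n = len(templates)
    rows, cols = len(templates[0]), len(templates[0][0])
    return [[sum(t[r][c] for t in templates) / n for c in range(cols)]
            for r in range(rows)]

def score(glyph, weights):
    """Sum of letter weights where the glyph has ink: higher = better."""
    return sum(weights[r][c]
               for r in range(len(glyph)) for c in range(len(glyph[0]))
               if glyph[r][c])

# Toy database of two letters, one template each.
db = {
    "I": weight_matrix([[[0, 1, 0], [0, 1, 0], [0, 1, 0]]]),
    "L": weight_matrix([[[1, 0, 0], [1, 0, 0], [1, 1, 1]]]),
}
unknown = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
best = max(db, key=lambda letter: score(unknown, db[letter]))
print(best)  # I
```

The real system replaces the scoring rule with a trained neural network, but the weight matrix it consumes is built on the same per-pixel statistics.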
Procedia PDF Downloads 390
1592 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life
Authors: Desplanches Maxime
Abstract:
Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. 
Beyond its capability for precise remaining-life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression
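The random forest / SVM predictor and the Newman-model database cannot be reproduced from the abstract alone, but the baseline such a predictor improves on is easy to state: extrapolate measured capacity fade to the conventional 80%-of-nominal end-of-life threshold. A minimal least-squares sketch with invented cycling data:

```python
# Naive remaining-useful-life baseline: fit capacity vs. cycle count by
# ordinary least squares and extrapolate to the end-of-life capacity
# (here the common 80%-of-nominal convention). Data are invented.

def remaining_cycles(cycles, capacities_ah, nominal_ah, eol_fraction=0.8):
    """Cycles left until capacity crosses eol_fraction * nominal_ah."""
    n = len(cycles)
    mx = sum(cycles) / n
    my = sum(capacities_ah) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(cycles, capacities_ah))
             / sum((x - mx) ** 2 for x in cycles))
    intercept = my - slope * mx
    eol_capacity = eol_fraction * nominal_ah
    eol_cycle = (eol_capacity - intercept) / slope
    return eol_cycle - cycles[-1]

# Toy data: a 2.0 Ah cell losing 0.001 Ah per cycle.
cyc = [0, 100, 200, 300]
cap = [2.0, 1.9, 1.8, 1.7]
print(remaining_cycles(cyc, cap, nominal_ah=2.0))  # ~100 cycles left
```

Real fade is nonlinear (SEI growth slows, then knee points appear), which is precisely why the paper trains regression models on a physics-generated aging database instead of extrapolating a straight line.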
Procedia PDF Downloads 71