Search results for: MNIST database
Paper Count: 1593

1533 Development of an Asset Database to Enhance the Circular Business Models for the European Solar Industry: A Design Science Research Approach

Authors: Ässia Boukhatmi, Roger Nyffenegger

Abstract:

The expansion of solar energy as a means to address the climate crisis is undisputed, but the increasing number of new photovoltaic (PV) modules being put on the market is simultaneously leading to increased challenges in terms of managing the growing waste stream. Many of the discarded modules are still fully functional but are often damaged by improper handling after disassembly or are not properly tested to be considered for a second life. In addition, the collection rate for dismantled PV modules in several European countries is only a fraction of previous projections, partly due to the increased number of illegal exports. The underlying problem behind these market imperfections is insufficient data exchange between the different actors along the PV value chain, as well as the limited traceability of PV panels during their lifetime. As part of the Horizon 2020 project CIRCUSOL, an asset database prototype was developed to tackle the described problems. In an iterative process applying the design science research methodology, different business models, as well as the technical implementation of the database, were established and evaluated. To explore the requirements of different stakeholders for the development of the database, surveys and in-depth interviews were conducted with various representatives of the solar industry. The proposed database prototype maps the entire value chain of PV modules, beginning with the digital product passport, which provides information about the materials and components contained in every module. Product-related information can then be expanded with performance data from existing installations. This information forms the basis for applying data analysis methods to forecast the appropriate end-of-life strategy, as well as the circular economy potential of PV modules, even before they arrive at the recycling facility. The database prototype has already been enriched with data from different data sources along the value chain. From a business model perspective, the database offers opportunities both in the area of reuse and with regard to the certification of sustainable modules. Here, participating actors have the opportunity to differentiate their business and exploit new revenue streams. Future research can apply this approach to further industries and product sectors, validate the database prototype in a practical context, and serve as a basis for standardization efforts to strengthen the circular economy.

Keywords: business model, circular economy, database, design science research, solar industry

Procedia PDF Downloads 80
1532 Investigating Real Ship Accidents with Descriptive Analysis in Turkey

Authors: İsmail Karaca, Ömer Söner

Abstract:

The use of advanced methods has been increasing day by day in the maritime sector, one of the sectors least affected by the COVID-19 pandemic. The aim is to minimize accidents, in particular by applying advanced methods to the investigation of marine accidents. This research conducted an exploratory statistical analysis of particular ship accidents in the database of the Transport Safety Investigation Center of Turkey. Forty-six ship accidents that occurred between 2010 and 2018 were selected from the database. In addition to the availability of a reliable and comprehensive database, taking advantage of robust statistical models for the investigation is critical to improving the safety of ships. Thus, descriptive analysis was used in the research to identify causes and conditional factors related to different types of ship accidents. The research outcomes underline the fact that environmental factors and the day/night ratio have a great influence on ship safety.
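
As a sketch of the kind of descriptive analysis described above, the snippet below tabulates accident frequencies with pandas. The file name and column names (accident_type, time_of_day, sea_state) are hypothetical stand-ins for fields in the Turkish accident database, not names taken from the paper.

```python
# Minimal descriptive analysis of a ship-accident table with pandas.
# File and column names are hypothetical placeholders.
import pandas as pd

accidents = pd.read_csv("ship_accidents_2010_2018.csv")

# Frequency of each accident type (collision, grounding, fire, ...)
print(accidents["accident_type"].value_counts())

# Day/night ratio, one of the conditional factors highlighted in the paper
print(accidents["time_of_day"].value_counts(normalize=True))

# Cross-tabulate accident type against an environmental factor
print(pd.crosstab(accidents["accident_type"], accidents["sea_state"]))
```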

Keywords: descriptive analysis, maritime industry, maritime safety, ship accident statistics

Procedia PDF Downloads 115
1531 Comparison between RILM, JSTOR, and WorldCat Used to Search for Secondary Literature

Authors: Stacy Jarvis

Abstract:

Databases such as JSTOR, RILM, and WorldCat have been the main sources and stores of literature in the music sphere. RILM (Répertoire International de Littérature Musicale) is a bibliographic database of over 2.6 million citations to writings about music from over 70 countries, produced by the Research Institute for the Study of Music at the University of Buffalo. JSTOR is an e-library of academic journals, books, and primary sources; it helps scholars find, utilise, and build upon a vast range of literature through a powerful teaching and research platform. Another database, WorldCat, is the world's biggest library catalogue, assisting scholars in finding library materials online. An evaluation of these databases in the music sphere is conducted by looking into their descriptions and intended uses and finding similarities and differences among them. Through comparison, it is found that they serve different purposes, though they share the goal of providing and storing literature. Since each database emphasizes different parts of the literature, the intended use of the three databases is evaluated; this is covered in the section on description, scope, and intended uses, which addresses the functional and literature differences among the three databases. It is also found that these databases have different quantitative potentials. This is determined by addressing the year each database began collecting literature and the number of articles, periodicals, albums, conference proceedings, music, dissertations, digital media, essay collections, journal articles, monographs, online resources, reviews, and reference materials that can be found in each of them. To compare the delivery of services to users, the importance of the databases in identifying literature on different topics is addressed in its own section. Even though these databases are all used in research, each has advantages and disadvantages, addressed in the corresponding sections; this is significant in determining which of the three is best, and in showing how the shortcomings of one database can be addressed by utilising two databases together while conducting research, as discussed in the section on combining RILM and JSTOR. All this information revolves around the idea that a huge amount of quantitative and qualitative data on music and digital content can be found in the presented databases; however, each database has a different construction and material features, contributing to musical scholarship in its own way.

Keywords: RILM, JSTOR, WorldCat, database, literature, research

Procedia PDF Downloads 60
1530 An Optimized Association Rule Mining Algorithm

Authors: Archana Singh, Jyoti Agarwal, Ajay Rana

Abstract:

Data mining is an efficient technology to discover patterns in large databases. Association rule mining techniques are used to find the correlation between the various item sets in a database, and this correlation between item sets is used in decision making and pattern analysis. In recent years, the problem of finding association rules in large datasets has been studied by many researchers. Various research papers on association rule mining (ARM) are first studied and analyzed to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans. The DIC algorithm needs fewer database scans but uses a complex lattice data structure. The main focus of this paper is to propose a new optimized algorithm (the Friendly Algorithm) and compare its performance with the existing algorithms. A dataset is used to find frequent itemsets and association rules with the existing and proposed algorithms, and it has been observed that the proposed algorithm finds all the frequent itemsets and essential association rules with fewer database scans than the existing algorithms. The proposed algorithm uses optimized data structures, namely a graph and an adjacency matrix.
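
The abstract does not give pseudocode for the Friendly Algorithm, so the sketch below only illustrates the baseline problem all the compared algorithms solve: mining frequent itemsets, Apriori-style, with one database scan per candidate level (the cost the proposed algorithm reduces).

```python
# Compact baseline Apriori for context; the paper's Friendly Algorithm is
# not specified in the abstract, so this only shows the frequent itemset
# problem the compared algorithms address.
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets as {frozenset: support_count}."""
    transactions = [frozenset(t) for t in transactions]
    k_sets = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while k_sets:
        # One pass over the database per level (the cost being optimized)
        counts = {s: sum(s <= t for t in transactions) for s in k_sets}
        level = {s: c for s, c in counts.items() if c >= min_support}
        frequent.update(level)
        # Candidate generation: union pairs of frequent k-itemsets
        keys = list(level)
        k_sets = {a | b for a, b in combinations(keys, 2)
                  if len(a | b) == len(a) + 1}
    return frequent

print(apriori([{"milk", "bread"}, {"milk", "eggs"}, {"milk", "bread", "eggs"}], 2))
```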

Keywords: association rules, data mining, dynamic item set counting, FP-growth, friendly algorithm, graph

Procedia PDF Downloads 389
1529 Standard Languages for Creating a Database to Display Financial Statements on a Web Application

Authors: Vladimir Simovic, Matija Varga, Predrag Oreski

Abstract:

XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements on web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries in the world recognize the role of the XBRL language in financial reporting and the benefits that the reporting format provides in the collection, analysis, preparation, publication, and exchange of data (information), which is the positive side of this language. We present the advantages and opportunities that a company may gain by using the XBRL format for business reporting. This paper also presents XBRL and the other languages used for creating the database, such as XML, XHTML, etc. The role of AJAX technology in the exchange of financial data between the web client and the web server is explained in detail, and the basic network layers involved in data exchange over the web are outlined.

Keywords: XHTML, XBRL, XML, JavaScript, AJAX technology, data exchange

Procedia PDF Downloads 369
1528 Morphological Features Fusion for Identifying INBREAST-Database Masses Using Neural Networks and Support Vector Machines

Authors: Nadia el Atlas, Mohammed el Aroussi, Mohammed Wahbi

Abstract:

In this paper, a novel technique for mass characterization based on robust feature fusion is presented. The proposed method consists of three main stages: (a) the first phase involves segmenting the masses using edge information; (b) the second phase calculates and fuses the most relevant morphological features; (c) the last phase is the classification step, which allows us to classify the images into benign and malignant masses. In this step we have implemented Support Vector Machines (SVM) and Artificial Neural Networks (ANN), which were evaluated with the following performance criteria: confusion matrix, accuracy, sensitivity, specificity, receiver operating characteristic (ROC), and error histogram. The effectiveness of this new approach was evaluated on a recently developed database, the INBREAST database. The fusion of the most appropriate morphological features provided very good results. The SVM achieves an accuracy of 64.3%, whereas the ANN classifier gives better results, with an accuracy of 97.5%.
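
A minimal sketch of the classification stage, assuming the segmentation and feature-fusion steps have already produced a feature matrix; synthetic data stands in for the fused INBREAST morphological features here.

```python
# SVM vs. ANN comparison on fused morphological features.
# make_classification is a synthetic stand-in for the INBREAST feature
# matrix; y encodes benign (0) vs. malignant (1).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = make_classification(n_samples=200, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("SVM", SVC(kernel="rbf")),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```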

Keywords: breast cancer, mammography, CAD system, features, fusion

Procedia PDF Downloads 567
1527 Using Priority Order of Basic Features for Circumscribed Masses Detection in Mammograms

Authors: Minh Dong Le, Viet Dung Nguyen, Do Huu Viet, Nguyen Huu Tu

Abstract:

In this paper, we present a new method for circumscribed mass detection in mammograms. Our method is evaluated on 23 mammographic images of circumscribed masses and 20 normal mammograms from the public Mini-MIAS database. The method is promising, with a sensitivity (SE) of 95% and only about one false positive per image (FPpI). To achieve these results, we proceed as follows: first, the input images are preprocessed with the aim of enhancing the key information of circumscribed masses; next, basic features of abnormal regions are calculated and statistically evaluated on the training database; then, mammograms in the testing database are divided into equal blocks, for which the corresponding features are calculated; finally, the priority order of the basic features is used to classify each block as an abnormal or a normal region.

Keywords: mammograms, circumscribed masses, evaluated statistically, priority order of basic features

Procedia PDF Downloads 304
1526 3D-Vehicle Associated Research Fields for Smart City via Semantic Search Approach

Authors: Haluk Eren, Mucahit Karaduman

Abstract:

This paper presents 15-year trends in scientific studies in a scientific database, considering the words '3D' and 'vehicle'. The two words are selected to find their associated publications in the IEEE scholarly database. Both keywords are entered individually for the years 2002, 2012, and 2016 to identify the subjects researchers preferred in those years. After searching and listing, we classified closely related research fields. Three years (2002, 2012, and 2016) were investigated to figure out the progress in the specified time intervals: the first interval, 2002-2012, is taken as the initial progress period, and the second, 2012-2016, as the period of fast development. We have found very interesting and beneficial results for understanding scholars' research field preferences over a decade. This information will be highly desirable for smart-city research involving 3D- and vehicle-related issues.

Keywords: vehicle, three-dimensional, smart city, scholarly search, semantic

Procedia PDF Downloads 294
1525 Utilising an Online Data Collection Platform for the Development of a Community Engagement Database: A Case Study on Building Inter-Institutional Partnerships at UWC

Authors: P. Daniels, T. Adonis, P. September-Brown, R. Comalie

Abstract:

The community engagement unit at the University of the Western Cape was tasked with establishing a community engagement database. The database would store information on all community engagement projects related to the university. The wealth of knowledge obtained from the various disciplines would be used to facilitate interdisciplinary collaboration within the university, as well as community-university partnership opportunities. The purpose of this qualitative study was to explore electronic data collection through the development of a database. Two types of electronic data collection platforms were used, namely an online questionnaire and email. The semi-structured questionnaire was used to collect data related to community engagement projects from different faculties and departments at the university. There are many benefits to using an electronic data collection platform, such as reduced cost and time, ease of reaching large numbers of potential respondents, and the possibility of providing anonymity to participants. Despite all the advantages of using an electronic platform, there were just as many challenges, as depicted in our findings. The findings suggest that certain barriers existed to using an electronic platform for data collection, even in an academic environment where knowledge and resources were in abundance. One of the challenges experienced in this process was the lack of dissemination of information via email to staff within faculties. The online software used for the questionnaire had its own limitations, such as only being able to access the questionnaire from the same electronic device. In a few cases, academics only completed the questionnaire after a telephonic prompt or a face-to-face meeting, which raises the question: is higher education in South Africa ready to embrace electronic platforms for data collection?

Keywords: community engagement, database, data collection, electronic platform, electronic tools, knowledge sharing, university

Procedia PDF Downloads 235
1524 Optimizing Availability of Marine Knowledge Repository with Cloud-Based Framework

Authors: Ahmad S. Mohd Noor, Emma A. Sirajudin, Nur F. Mat Zain

Abstract:

Reliability is an important property for a knowledge repository system. The National Marine Bioinformatics System (NABTICS) is a marine knowledge repository portal that aims to provide a baseline for marine biodiversity and a tool for researchers and developers. It is intended to be a large and growing online database and also a metadata system for inputs of research analysis. The trend in present large distributed systems, such as cloud computing, is the delivery of computing as a service rather than a product. The goal of this research is to give NABTICS greater availability by integrating it with cloud-based Neighbor Replication and Failure Recovery (NRFR). This can be achieved by deploying NABTICS in a distributed environment. As a result, users experience minimum downtime while using the system should a server fail. Consequently, the online database application achieves high availability.

Keywords: cloud, availability, distributed system, marine repository, database replication

Procedia PDF Downloads 443
1523 Designing a Model for Preparing Reports on the Automatic Earned Value Management Progress by the Integration of Primavera P6, SQL Database, and Power BI: A Case Study of a Six-Storey Concrete Building in Mashhad, Iran

Authors: Hamed Zolfaghari, Mojtaba Kord

Abstract:

Project planners and controllers are frequently faced with the challenge of inadequate software for the preparation of automatic project progress reports based on actual project information updates. They usually build dashboards in Microsoft Excel, which is local and cannot be used online. Another shortcoming is that such dashboards are not linked to planning software such as Microsoft Project, which itself lacks the database required for data storage. This study proposes a model for the preparation of automatic online project progress reports based on actual project information updates, by integrating Primavera P6, an SQL database, and Power BI for a construction project. The designed model enables project planners and controllers to prepare project reports automatically and immediately after updating the project schedule with actual information. To develop the model, the data were entered into P6, and the information was stored in the SQL database. The proposed model could prepare a wide range of reports, such as earned value management, HR, financial, physical, and risk reports, automatically in the Power BI application. Furthermore, the reports could be published and shared online.
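
The earned value indicators such a report surfaces are standard; a minimal sketch with pandas is shown below, where a small DataFrame stands in for the activity-level PV/EV/AC figures that would come out of the Primavera P6 data stored in the SQL database.

```python
# Standard earned value management indicators per activity.
# The DataFrame is a placeholder for the P6/SQL query result.
import pandas as pd

df = pd.DataFrame({
    "activity": ["excavation", "foundation", "columns"],
    "PV": [100.0, 80.0, 60.0],   # planned value (BCWS)
    "EV": [100.0, 70.0, 30.0],   # earned value (BCWP)
    "AC": [110.0, 75.0, 40.0],   # actual cost (ACWP)
})

df["SV"] = df["EV"] - df["PV"]   # schedule variance
df["CV"] = df["EV"] - df["AC"]   # cost variance
df["SPI"] = df["EV"] / df["PV"]  # schedule performance index
df["CPI"] = df["EV"] / df["AC"]  # cost performance index
print(df)  # a BI tool such as Power BI would chart these columns
```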

Keywords: primavera P6, SQL, Power BI, EVM, integration management

Procedia PDF Downloads 65
1522 New Approach for Constructing a Secure Biometric Database

Authors: A. Kebbeb, M. Mostefai, F. Benmerzoug, Y. Chahir

Abstract:

Multimodal biometric identification is the combination of several biometric systems. The challenge of this combination is to reduce some of the limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and protection of a multimodal biometric database dedicated to an identification system. We use topological watermarking to hide the relation between the face image and the registered descriptors extracted from other modalities of the same person, for more secure user identification.

Keywords: biometric databases, multimodal biometrics, security authentication, digital watermarking

Procedia PDF Downloads 345
1521 Predictive Analysis of Personnel Relationship in Graph Database

Authors: Kay Thi Yar, Khin Mar Lar Tun

Abstract:

Nowadays, social networks are popular and widely used all over the world. In addition, searching for each person's personal information and for the connections between people (their relations in the real world) has become an interesting issue in our society. In this paper, we propose a framework with three parts for exploring people's relations from their connected information. The first part focuses on the graph database structure used to store the connected data of people's information. The second proposes a graph database searching algorithm, the Modified-SoS-ACO (Sense of Smell-Ant Colony Optimization). The last part proposes a deductive reasoning algorithm to define the relationship between two persons. This study presents the proper storage structure for connected information, a graph searching algorithm, and a deductive reasoning algorithm to predict and analyze personnel relationships from people's connected information.
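
The Modified-SoS-ACO search itself is not detailed in the abstract; as a sketch of the underlying task, the snippet below stores people in an adjacency structure and finds the chain connecting two persons with a plain breadth-first search (all names are hypothetical).

```python
# People as nodes in an adjacency structure; a path search exposes how
# two persons are connected. BFS stands in for the paper's search method.
from collections import deque

graph = {  # hypothetical connected-information graph
    "Aye": ["Mya", "Ko"],
    "Mya": ["Aye", "Thiri"],
    "Ko": ["Aye", "Thiri"],
    "Thiri": ["Mya", "Ko"],
}

def connection_path(graph, start, goal):
    """Breadth-first search returning the shortest relation chain."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(connection_path(graph, "Aye", "Thiri"))  # -> ['Aye', 'Mya', 'Thiri']
```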

Keywords: personnel information, graph storage structure, graph searching algorithm, deductive reasoning algorithm

Procedia PDF Downloads 416
1520 Refitting Equations for Peak Ground Acceleration in Light of the PF-L Database

Authors: Matevž Breška, Iztok Peruš, Vlado Stankovski

Abstract:

A systematic overview of existing Ground Motion Prediction Equations (GMPEs) has been published by Douglas. The number of earthquake recordings used for fitting these equations has increased in the past decades; the current PF-L database contains 3550 recordings. Since GMPEs frequently model the peak ground acceleration (PGA), the goal of the present study was to refit a selection of 44 of the existing equation models for PGA in light of the latest data. The Levenberg-Marquardt algorithm was used to fit the coefficients of the equations, and the results are evaluated both quantitatively, by presenting the root mean squared error (RMSE), and qualitatively, by drawing graphs of the five best-fitted equations. The RMSE was found to be as low as 0.08 for the best equation models. The newly estimated coefficients vary from the values published in the original works.
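
For illustration, the refitting step can be reproduced with SciPy's Levenberg-Marquardt implementation; the simple functional form and synthetic records below are placeholders, not the PF-L data or any of the 44 published models.

```python
# Refitting a simple illustrative GMPE form with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import curve_fit

def gmpe(X, c1, c2, c3):
    M, R = X                                  # magnitude, distance
    return c1 + c2 * M - c3 * np.log(R + 10.0)  # ln(PGA) model (placeholder)

rng = np.random.default_rng(0)
M = rng.uniform(4.0, 7.0, 200)
R = rng.uniform(5.0, 200.0, 200)
ln_pga = gmpe((M, R), -2.0, 0.9, 1.1) + rng.normal(0.0, 0.3, 200)  # synthetic records

coeffs, _ = curve_fit(gmpe, (M, R), ln_pga, p0=[0.0, 1.0, 1.0], method="lm")
rmse = np.sqrt(np.mean((gmpe((M, R), *coeffs) - ln_pga) ** 2))
print(coeffs, rmse)
```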

Keywords: Ground Motion Prediction Equations, Levenberg-Marquardt algorithm, refitting PF-L database, peak ground acceleration

Procedia PDF Downloads 427
1519 Application of Water Quality Modelling in Total Maximum Daily Load (TMDL) Management: A Review

Authors: S. A. Che Osmi, W. M. F. W. Ishak, S. F. Che Osmi

Abstract:

Nowadays, the issues of water quality and water pollution have become a major problem across the country. Many management agencies attempt to develop their own TMDL databases in order to control river pollution. Over the past decade, mathematical modeling has been used as the tool for TMDL development. This paper presents the application of water quality modeling to develop total maximum daily load (TMDL) information. To obtain a reliable TMDL database, the appropriate water quality model should be chosen based on the available data. This paper discusses the use of several water quality models, such as QUAL2E, QUAL2K, and EFDC, to develop TMDLs. Attempts to integrate several models are also discussed. The differences among these models, such as their one-, two-, or three-dimensional formulations, determine their suitability for developing a TMDL database.

Keywords: TMDL, water quality modeling, QUAL2E, EFDC

Procedia PDF Downloads 402
1518 Modelling of Geotechnical Data Using Geographic Information System and MATLAB for Eastern Ahmedabad City, Gujarat

Authors: Rahul Patel

Abstract:

Ahmedabad, a city located in western India, is experiencing rapid growth due to urbanization and industrialization. It is projected to become a metropolitan city in the near future, resulting in various construction activities. Soil testing is necessary before construction can commence, so construction companies and contractors must conduct it periodically. The focus of this study is the process of creating a spatial database that is digitally formatted and integrates geotechnical data with a Geographic Information System (GIS). Building a comprehensive geotechnical (geo-)database involves three steps: collecting borehole data from reputable sources, verifying the accuracy and redundancy of the data, and standardizing and organizing the geotechnical information for integration into the database. Once the database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topo to Raster interpolation process in GIS, estimated values are assigned to all locations based on the sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (T/m²). Various interpolation techniques were cross-validated to ensure information accuracy. The resulting GIS map enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This study highlights the potential of GIS to provide an efficient solution to complex problems that would otherwise be tedious to tackle by other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers.
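
ArcGIS's Topo to Raster tool is proprietary, so as a stand-in the sketch below interpolates sampled SPT N-values onto a regular grid with SciPy; the coordinates and values are synthetic placeholders for the Ahmedabad borehole data.

```python
# Generic spatial interpolation of sampled SPT N-values onto a grid.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 1000.0, (50, 2))          # borehole locations (m), synthetic
spt_n = rng.integers(5, 60, 50).astype(float)   # SPT N-values at those points

gx, gy = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
surface = griddata(xy, spt_n, (gx, gy), method="cubic")  # contour-ready raster
print(np.nanmin(surface), np.nanmax(surface))
```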

Keywords: ArcGIS, borehole data, geographic information system, geo-database, interpolation, SPT N-value, soil classification, Φ-Value, bearing capacity

Procedia PDF Downloads 47
1517 Developing a Town Based Soil Database to Assess the Sensitive Zones in Nutrient Management

Authors: Sefa Aksu, Ünal Kızıl

Abstract:

For this study, a town-based soil database was created for Gümüşçay District of Biga Town, Çanakkale, Turkey. Crop and livestock production are the major activities in the district. Nutrient management is mainly based on commercial fertilizer application, ignoring the livestock manure. Within the boundaries of the district, 122 soil sampling points were determined over the satellite image. Soil samples were collected from the determined points with the help of a handheld Global Positioning System. Labeled samples were sent to a commercial laboratory to determine 11 soil parameters, including salinity, pH, lime, organic matter, nitrogen, phosphorus, potassium, iron, manganese, copper, and zinc. Based on the test results, soil maps for the mentioned parameters were developed using remote sensing, GIS, and geostatistical analysis. In this study, we developed a GIS database that will be used for soil nutrient management. The methods are explained, and the soil maps and their interpretations are summarized in the study.

Keywords: geostatistics, GIS, nutrient management, soil mapping

Procedia PDF Downloads 346
1516 Gender Perspective in Peace Operations: An Analysis of 14 UN Peace Operations

Authors: Maressa Aires de Proenca

Abstract:

The inclusion of a gender perspective in peace operations is based on a series of conventions, treaties, and resolutions designed to protect and include women by addressing gender mainstreaming. The UN Security Council recognizes that women's participation and gender equality within peace operations are indispensable for achieving sustainable development and peace. However, the participation of women in the field of peace and security is still embryonic. There are gaps in female participation in conflict resolution and peace promotion spaces, and it is not clear how women are present in these spaces. This absence may correspond to silence about representation and the guarantee of the female perspective within the context of peace promotion. Thus, the present research aimed to describe the panorama of the participation of the women currently active in the 14 active UN peace operations, which are: 1) MINUJUSTH, Haiti; 2) MINURSO, Western Sahara; 3) MINUSCA, Central African Republic; 4) MINUSMA, Mali; 5) MONUSCO, the Democratic Republic of the Congo; 6) UNAMID, Darfur; 7) UNDOF, Golan; 8) UNFICYP, Cyprus; 9) UNIFIL, Lebanon; 10) UNISFA, Abyei; 11) UNMIK, Kosovo; 12) UNMISS, South Sudan; 13) UNMOGIP, India and Pakistan; and 14) UNTSO, Middle East. A database was constructed that recorded: (1) the position held by the woman in the peace operation, (2) her profession, (3) educational level, (4) marital status, (5) religion, (6) nationality, (7) number of years working with peace operations, and (8) whether the operation in which she serves has provided training on gender issues. For the construction of this database, official reports and statistics accessed through the UN Peacekeeping Resource Hub were used, along with the United Nations Statistical Commission, the Peacekeeping Master Open Datasets, the Armed Conflict Database (ACD), the International Institute for Strategic Studies (IISS) database, the Armed Conflict Location & Event Data Project (ACLED) database, and the Evidence and Data for Gender Equality (EDGE) database. In addition to accessing these databases, peace operations were contacted directly and data requested individually. The database showed that the presence of women in these peace operations is still incipient, but growing. There are few women in command positions, and most of them occupy administrative or human-care positions.

Keywords: women, peace and security, peacekeeping operations, peace studies

Procedia PDF Downloads 116
1515 Alphabet Recognition Using Pixel Probability Distribution

Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay

Abstract:

Our project topic is "Alphabet Recognition Using Pixel Probability Distribution". The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files, etc. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. A popular mobile application reads a visiting card and stores it directly to the contacts. OCR is also used in radar systems for reading speeding vehicles' license plates, among many other applications. The implementation of our project was done using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: this module performs database generation. The database was generated using two methods: (a) run-time generation, which builds the database at compile time using the inbuilt fonts of the OpenCV library; human intervention is not necessary for generating this database; (b) contour detection, in which a 'jpeg' template containing different fonts of an alphabet is converted to a weighted matrix using specialized OpenCV functions (contour detection and blob detection). The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to be stored (119 KB, precisely). (2) Preprocessing: the input image is pre-processed using image processing concepts such as adaptive thresholding, binarization, dilation, etc., and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: the extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
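
A minimal sketch of the preprocessing module with OpenCV, covering the adaptive thresholding, binarization, and dilation steps named above; "page.jpg" is a placeholder input, and the final contour pass only hints at the segmentation module.

```python
# Preprocessing: adaptive threshold -> binary image -> dilation.
import cv2
import numpy as np

img = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Adaptive threshold copes with uneven illumination across a scanned page
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, blockSize=11, C=2)

# Dilation thickens strokes so contour detection finds whole letters
kernel = np.ones((2, 2), np.uint8)
processed = cv2.dilate(binary, kernel, iterations=1)

# Segmentation would then extract lines/words/letters, e.g. via findContours
contours, _ = cv2.findContours(processed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours), "candidate letter regions")
```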

Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix

Procedia PDF Downloads 357
1514 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
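
A compact sketch of the data-analysis pipeline described, assuming X stands in for the non-destructive aging indicators generated by the aged Newman model and y for the corresponding remaining life; PCA plus a random forest regressor mirrors the reduce-then-regress flow (t-SNE and SVM are the alternative pieces the paper also mentions).

```python
# Dimensionality reduction + regression for remaining-life estimation.
# X and y are synthetic placeholders for the model-generated database.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                                    # aging indicators
y = 1000 - 40 * X[:, 0] + 15 * X[:, 1] + rng.normal(0, 10, 500)   # remaining cycles

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=5), RandomForestRegressor(random_state=0))
model.fit(X_tr, y_tr)

rel_err = np.abs(model.predict(X_te) - y_te) / y_te
print(f"mean relative error: {rel_err.mean():.1%}")  # the paper reports ~5%
```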

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 33
1513 GIS Database Creation for Impacts of Domestic Wastewater Disposal on Bida Town, Niger State, Nigeria

Authors: Ejiobih Hyginus Chidozie

Abstract:

A Geographic Information System (GIS) is a configuration of computer hardware and software specifically designed to effectively capture, store, update, manipulate, analyse, and display all forms of spatially referenced information. The GIS database is referred to as the heart of GIS: it holds location data, attribute data, and the spatial relationships between objects and their attributes. Sewage and wastewater management have assumed increased importance lately as a result of general concern expressed worldwide about the problems of environmental pollution: contamination of the atmosphere, rivers, lakes, oceans, and groundwater. In this research, a GIS database was created to study the impacts of domestic wastewater disposal methods on Bida town, Niger State, as a model for investigating similar impacts on other cities in Nigeria. Results from the GIS database are very useful to decision makers and researchers. Bida town was subdivided into four regions, eight zones, and 24 sectors based on the prevailing natural morphology of the town. A GPS receiver and a structured questionnaire were used to collect information and attribute data from 240 households in the study area. Domestic wastewater samples were collected from the 24 sectors of the study area for laboratory analysis. ArcView 3.2a GIS software was used to create the GIS databases for the ecological, health, and socioeconomic impacts of domestic wastewater disposal methods in Bida town.

Keywords: environment, GIS, pollution, software, wastewater

Procedia PDF Downloads 381
1512 Subspace Rotation Algorithm for Implementing Restricted Hopfield Network as an Auto-Associative Memory

Authors: Ci Lin, Tet Yeap, Iluju Kiringa

Abstract:

This paper introduces the subspace rotation algorithm (SRA) to train the Restricted Hopfield Network (RHN) as an auto-associative memory. The subspace rotation algorithm is a gradient-free subspace tracking approach based on the singular value decomposition (SVD). In comparison with Backpropagation Through Time (BPTT) for training the RHN, it is observed that SRA always converges to the optimal solution, whereas BPTT cannot achieve the same performance when the model becomes complex and the number of patterns is large. The AUTS case study showed that the RHN model trained by SRA achieves a better attraction-basin structure, with a larger radius in general, than the Hopfield Network (HNN) model trained by the Hebbian learning rule. By learning 10,000 patterns from the MNIST dataset with RHN models with different numbers of hidden nodes, it is observed that several components can be adjusted to achieve a balance between recovery accuracy and noise resistance.

Keywords: hopfield neural network, restricted hopfield network, subspace rotation algorithm, hebbian learning rule

Procedia PDF Downloads 89
1511 Local Texture and Global Color Descriptors for Content Based Image Retrieval

Authors: Tajinder Kaur, Anu Bala

Abstract:

An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. A new algorithm for content-based image retrieval (CBIR) is presented in this paper. The proposed method combines color and texture features, which capture the global and local information of the image. The local texture feature is extracted using local binary patterns (LBP), which are evaluated by taking into consideration the local difference between the center pixel and its neighbors. For the global color feature, the color histogram (CH) is used, calculated over the RGB (red, green, and blue) channels separately. The performance of the proposed combination of color and texture features is tested on the Corel 1000 database, a natural-image database. The results show a significant improvement in the evaluation measures compared to LBP and CH alone.
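
A sketch of the combined descriptor, assuming scikit-image for the LBP step: a uniform-LBP histogram for local texture is concatenated with per-channel RGB histograms for global color, yielding one feature vector per image that a CBIR system would then compare by distance.

```python
# Combined texture (LBP histogram) + color (RGB histograms) descriptor.
import numpy as np
from skimage import data, color
from skimage.feature import local_binary_pattern

img = data.astronaut()                      # stand-in for a Corel-1000 image
gray = color.rgb2gray(img)

# Texture: uniform LBP over 8 neighbors, radius 1 (values fall in 0..9)
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# Color: 16-bin histogram per RGB channel
color_hist = np.concatenate(
    [np.histogram(img[..., c], bins=16, range=(0, 256), density=True)[0]
     for c in range(3)])

feature = np.concatenate([lbp_hist, color_hist])
print(feature.shape)  # (10 + 48,) -> one vector per image for retrieval
```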

Keywords: color, texture, feature extraction, local binary patterns, image retrieval

Procedia PDF Downloads 330
1510 Database Playlists: Croatia's Popular Music in the Mirror of Collective Memory

Authors: Diana Grguric, Robert Svetlacic, Vladimir Simovic

Abstract:

This scientific research analytically explores database playlists, studying memory culture through Croatian popular radio music. The research is based on the scientific analysis of databases developed from the playlists of ten Croatian radio stations. The Croatian songs played on Statehood Day from 2008 to 2013 are analyzed in order to gain insight into their (memory) potential in terms of storing, interpreting, and presenting a national identity. The research starts from the general assumption that popular music is an efficient identifier, transmitter, and promoter of national identity. The aim of the scientific analysis of the database was to reveal the specific titles of Croatian popular songs that participate in marking memories, to analyze their symbolic capital so as to gain insight into the popular-music experience of the past, and to develop a new method of scientifically based analysis of specific databases.

Keywords: specific databases, popular radio music, collective memory, national identity

Procedia PDF Downloads 329
1509 SIPTOX: Spider Toxin Database Information Repository System of Protein Toxins from Spiders by Using MySQL Method

Authors: Iftikhar Tayubi, Tabrej Khan, Rayan Alsulmi, Abdulrahman Labban

Abstract:

Spiders produce a special kind of substance called a toxin. The toxin is composed of many types of protein, which differ from species to species. Spider toxin consists of several proteins and non-proteins that include various categories of toxins, such as myotoxins, neurotoxins, cardiotoxins, dendrotoxins, haemorrhagins, and fibrinolytic enzymes. Protein sequence information for the toxins, with references, was derived from the literature and public databases. Previous findings suggest that spider toxins could be a good choice for treating different types of tumors and cancer. Many therapeutic regimens cause more side effects than treatment benefit; hence, a different approach must be adopted for the treatment of cancer. Combinations of drugs are being encouraged, and dramatic outcomes have been reported. Spider toxin is one of the natural cytotoxic compounds; hence, it is being used to treat different types of tumors, and in particular its positive effect on breast cancer has been reported during the last few decades. The utility of this database is that it provides a user-friendly interface for users to retrieve information about spiders, toxins, and toxin proteins of different spider species. SPIDTOXD provides a single source of information about spider toxins, which will be useful for pharmacologists, neuroscientists, toxicologists, and medicinal chemists. The well-ordered and accessible web interface allows users to explore the detailed information on spiders and toxin proteins, including common name, scientific name, entry id, entry name, protein name, and length of the protein sequence. The database interface will satisfy the demands of the scientific community by providing in-depth knowledge about spiders and their toxins. We adopted a methodology using MySQL and PHP, and used SmartDraw for the design. Users can thus navigate from one section to another, depending on their field of interest. This database contains a wealth of information on species, toxins, clinical data, etc. It will be useful for the scientific community, basic researchers, and those interested in the potential pharmaceutical industry.
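
The abstract does not publish the actual schema, so the sketch below only mirrors the fields it lists (common name, scientific name, entry id/name, protein name, sequence length) in an illustrative two-table layout; SQLite stands in for MySQL here, and the inserted rows are made-up examples.

```python
# Illustrative two-table layout for a spider-toxin repository; column
# names mirror the fields listed in the abstract. All rows are placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE spider (
    spider_id       INTEGER PRIMARY KEY,
    common_name     TEXT,
    scientific_name TEXT
);
CREATE TABLE toxin_protein (
    entry_id        TEXT PRIMARY KEY,
    entry_name      TEXT,
    protein_name    TEXT,
    sequence_length INTEGER,
    spider_id       INTEGER REFERENCES spider(spider_id)
);
""")
conn.execute("INSERT INTO spider VALUES (1, 'example tarantula', 'Genus species')")
conn.execute("INSERT INTO toxin_protein VALUES ('TX0001', 'EXTOX1', 'example neurotoxin', 34, 1)")

query = """SELECT s.scientific_name, t.protein_name, t.sequence_length
           FROM toxin_protein t JOIN spider s USING (spider_id)"""
for row in conn.execute(query):
    print(row)
```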

Keywords: siptoxd, php, mysql, toxin

Procedia PDF Downloads 148
1508 3D Objects Indexing Using Spherical Harmonic for Optimum Measurement Similarity

Authors: S. Hellam, Y. Oulahrir, F. El Mounchid, A. Sadiq, S. Mbarki

Abstract:

In this paper, we propose a method for three-dimensional (3-D) model indexing based on defining a new descriptor using spherical harmonics. The purpose of the method is to minimize the processing time on the database of object models and the time needed to search for objects similar to a query object. First, we define the new descriptor using a new division of the 3-D object within a sphere. Then we define a new distance that is used in the search for similar objects in the database.
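
The new descriptor itself is not specified in the abstract; for context, the sketch below computes the classic rotation-invariant spherical-harmonic energy signature that such descriptors build on, with a placeholder spherical function standing in for a sampled 3-D model.

```python
# Rotation-invariant spherical-harmonic energy signature of a function
# sampled on the sphere. f is a placeholder for a shape function
# (e.g. radial extent per direction) extracted from a 3-D model.
import numpy as np
from scipy.special import sph_harm

theta = np.linspace(0, 2 * np.pi, 64)            # azimuth
phi = np.linspace(0, np.pi, 32)                  # polar angle
T, P = np.meshgrid(theta, phi)
f = 1.0 + 0.3 * np.cos(3 * T) * np.sin(P)        # placeholder shape function

w = np.sin(P) * (theta[1] - theta[0]) * (phi[1] - phi[0])  # quadrature weights

descriptor = []
for l in range(4):                               # energy per SH degree
    energy = 0.0
    for m in range(-l, l + 1):
        coeff = np.sum(f * np.conj(sph_harm(m, l, T, P)) * w)
        energy += abs(coeff) ** 2
    descriptor.append(np.sqrt(energy))
print(descriptor)   # rotation-invariant signature used for similarity search
```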

Keywords: 3D indexation, spherical harmonic, similarity of 3D objects, measurement similarity

Procedia PDF Downloads 401
1507 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database

Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani

Abstract:

The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern margin of Eurasia, resulting in a considerably active seismic region. The NATO-supported Harmonization of Seismic Hazard Maps in the Western Balkan Countries project (BSHAP; 2007-2011, 2012-2015) supported the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later produced by these countries at the national scale, significant differences in design PGA values are observed at the borders, for instance between northern Albania and Montenegro, or southern Albania and Greece. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project, a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably, up to 20,939 records with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of ground motion models. In some cases, it was observed that some events were more extensively documented in one database than in another, like the 1979 Montenegro earthquake, with a considerably larger number of records in the BSHAP analogue strong motion database than in ESM23. Therefore, the strong motion flat-file provided by the BSHAP project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was done using the GMPE performance metrics available within the SMT in the OpenQuake platform; the likelihood model and Euclidean Distance Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.

Keywords: residual analysis, GMPE, western balkan, strong motion, openquake

Procedia PDF Downloads 45
1506 Scaling Siamese Neural Network for Cross-Domain Few Shot Learning in Medical Imaging

Authors: Jinan Fiaidhi, Sabah Mohammed

Abstract:

Cross-domain learning in the medical field is a research challenge, as many conditions, as in oncology imaging, are captured with different imaging modalities. Moreover, in most medical learning applications, the training sample size is relatively small. Although few-shot learning (FSL) with a Siamese neural network can be trained on a small sample with remarkable accuracy, it fails to be effective across multiple domains, as its convolution weights are set for task-specific applications. In this paper, we address this problem by enabling FSL to shift across domains: we design a two-layer FSL network that learns individually from each domain and produces a shared feature map, with extra modulation used at the second layer to recognize important targets from mixed domains. Our initial experiments on mixed medical datasets like Medical-MNIST reveal promising results. We aim to continue this research with full-scale analytics to test our cross-domain FSL learning.
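
For context, a minimal Siamese building block in PyTorch is sketched below: one shared encoder embeds both images, and a distance on the embeddings scores whether the pair matches. The paper's two-layer cross-domain modulation is not reproduced; the input size simply matches a 28x28 MNIST-style image.

```python
# Minimal Siamese few-shot building block: shared encoder + pair distance.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 7 * 7, 64))

    def forward(self, x):
        return self.net(x)

class Siamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()  # weights shared across both branches

    def forward(self, a, b):
        return torch.pairwise_distance(self.encoder(a), self.encoder(b))

model = Siamese()
a, b = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)  # MNIST-sized pairs
print(model(a, b).shape)  # one distance per pair; train with contrastive loss
```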

Keywords: Siamese neural network, few-shot learning, meta-learning, metric-based learning, thick data transformation and analytics

Procedia PDF Downloads 7
1505 Indoor Localization by Pattern Matching Method Based on Extended Database

Authors: Gyumin Hwang, Jihong Lee

Abstract:

This paper studies a chirp spread spectrum (CSS)-based indoor localization system, which is easy to implement, inexpensive to build, and covers a larger area than other systems. However, this system is affected by reflected distance data, a problem caused by the multi-path effect. Error caused by multi-path is difficult to correct because the indoor environment cannot be fully described. In this paper, in order to solve the multi-path problem, we supplement the localization system with a pattern matching method based on an extended database, which improves the precision of the estimated position. The method is verified by experiments in a gymnasium. The database was constructed at 1 m intervals, and 16 samples were collected from random positions inside the region of database points. As a result, this paper shows higher accuracy than the existing method, presented through graphs and tables.
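
A toy sketch of fingerprint matching with the Mahalanobis distance (one of the listed keywords): each database entry pairs a known position with ranging measurements, and a query is assigned the position of the statistically closest entry. All values are synthetic placeholders for the CSS ranging data.

```python
# Pattern matching against a position-fingerprint database using the
# Mahalanobis distance. All measurements are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
db_positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
db_features = rng.normal(size=(4, 3)) + 5.0  # mean ranges to 3 anchors per DB point

# Inverse covariance of the fingerprints (ridge keeps it invertible)
cov_inv = np.linalg.inv(np.cov(db_features, rowvar=False) + 1e-6 * np.eye(3))

def mahalanobis(u, v):
    d = u - v
    return float(np.sqrt(d @ cov_inv @ d))

query = db_features[2] + rng.normal(0.0, 0.1, 3)  # noisy measurement near point 2
best = min(range(len(db_features)), key=lambda i: mahalanobis(query, db_features[i]))
print("estimated position:", db_positions[best])
```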

Keywords: chirp spread spectrum, indoor localization, pattern-matching, time of arrival, multi-path, mahalanobis distance, reception rate, simultaneous localization and mapping, laser range finder

Procedia PDF Downloads 217
1504 Optimizing the Capacity of a Convolutional Neural Network for Image Segmentation and Pattern Recognition

Authors: Yalong Jiang, Zheru Chi

Abstract:

In this paper, we study the factors that determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model to best match a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model to make it better fit a task. Secondly, the number of independent functional units in the capsule network is adjusted to fit it to the training dataset. Thirdly, a method based on a Bayesian GAN is proposed to enrich the variance in the current dataset and thereby increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor that determines the capacity of a network model. By adjusting the number of functional units, the capacity of a model can better match the complexity of a dataset.

Keywords: CNN, convolutional neural network, capsule network, capacity optimization, character recognition, data augmentation, semantic segmentation

Procedia PDF Downloads 122