Search results for: labeled faces in the wild (LFW) database
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2671

2521 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth of multi-tenant cloud applications and the associated security concerns, many organisations weigh the trade-offs and continue to rely on the fast and reliable transaction processing of disk-based database systems. For these organisations, the only way of increasing throughput is to improve the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes the novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. These promising results show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
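
The abstract does not give implementation details, so the sketch below only illustrates the access-invariance idea it builds on: a transaction is first dry-run to discover its read set, those pages are bulk-fetched from disk, and execution then proceeds at near in-memory speed. All names (BufferPool, dry_run, etc.) are hypothetical, not the authors' EMA code.

```python
class BufferPool:
    def __init__(self, disk):
        self.disk = disk          # page_id -> page data, the persistent store
        self.cache = {}           # in-memory pages

    def prefetch(self, page_ids):
        # Bulk-load the predicted read set so execution hits memory only.
        for pid in page_ids:
            if pid not in self.cache:
                self.cache[pid] = self.disk[pid]

    def read(self, pid):
        # Falls back to disk on a mispredicted access.
        return self.cache.setdefault(pid, self.disk[pid])

def run_transaction(pool, txn):
    predicted = txn.dry_run()     # discover the read set on (possibly stale) data
    pool.prefetch(predicted)      # one batched disk round-trip
    return txn.execute(pool)      # now runs under locking at near in-memory speed
```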

Keywords: in-memory database, disk-based system, hybrid database, concurrency control

Procedia PDF Downloads 387
2520 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge

Authors: K. Eryilmaz, G. Mercanoglu

Abstract:

Aim: Purification of the final product, the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. The two most commonly used are C18 solid phase extraction (SPE) and elution from a weak cation exchanger cartridge. The C18 SPE method yields a final product of high purity, while elution from the weak cation exchanger cartridge is pH-dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds in cases where the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of a pH problem or colloidal impurity. Material and Methods: For colloidal impurity formation, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped on an SPE C18 cartridge. The cartridge was washed with 10 mL of saline to remove impurities to the waste vial. The product trapped on the cartridge was eluted with 2 mL of 50% ethanol and collected in the final product vial via passage through a 0.22 µm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by HPLC (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: UV and radioactivity detector results in the HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method. Conclusion: The improved purification method can be used as an additional step to remove impurities that may result from lanthanide-peptide synthesis in which weak cation exchange purification is used as the last step. The purification of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are two major advantages.

Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis

Procedia PDF Downloads 137
2519 Perusing the Influence of a Visual Editor in Enabling PostgreSQL Query Learn-Ability

Authors: Manuela Nayantara Jeyaraj

Abstract:

PostgreSQL is an object-relational database management system (ORDBMS) with an architecture that ensures optimal-quality data management. Owing to the overshadowing growth of similar ORDBMSs, however, PostgreSQL has not become well known among the database user community. Despite having its features and built-in functionalities shadowed, PostgreSQL offers a vast range of utilities for data manipulation and hence deserves to be promoted more among users. Introducing PostgreSQL in a way that highlights its advantageous features, however, requires endorsing learn-ability as an add-on, since the target groups consist of both amateur and professional PostgreSQL users. The scope of this paper is to provide easy contemplation of query formulations and flows through a visual editor, designed according to user-interface principles, that supports every aspect of making PostgreSQL learn-able through self-operation and the creation of queries within the editor. The paper scrutinizes the importance of choosing PostgreSQL as the working database environment, the visual perspectives that influence human behaviour and ultimately learning, the modes in which learn-ability can be provided via visualization, and the advantages reaped by implementing the proposed system features.

Keywords: database, learn-ability, PostgreSQL, query, visual-editor

Procedia PDF Downloads 150
2518 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of acquired data holds significant importance before the data processing and interpretation phases are executed. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis using the normalized root mean square error (NRMSE), Cronbach's alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be flat, tectonically little affected, and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also demonstrated the bias and erraticism of the acquired database. The low estimated value of alpha (α) in Cronbach's alpha test confirmed the poor reliability of the acquired database. A database of such low quality needs excessive static correction or, in some cases, reacquisition of the data, which is most of the time not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models to support much more informed decisions in the hydrocarbon exploration field.
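
For readers unfamiliar with the statistics named above, here is a minimal sketch of how each test could be computed, assuming aligned arrays of observed and predicted amplitudes per seismic line; the authors' exact formulations may differ, and the data below are synthetic placeholders.

```python
import numpy as np
from scipy import stats

def nrmse(observed, predicted):
    # Root mean square error normalized by the observed range.
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    # items: (n_samples, n_items) matrix of repeated measurements.
    items = np.asarray(items)
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
obs = rng.normal(size=200)                      # stand-in observed amplitudes
pred = obs + rng.normal(scale=0.3, size=200)    # stand-in predicted amplitudes
repeats = obs[:, None] + rng.normal(scale=0.2, size=(200, 4))  # repeated picks

print("NRMSE:", nrmse(obs, pred))
print("Cronbach's alpha:", cronbach_alpha(repeats))
print("t-test:", stats.ttest_ind(obs, pred))    # null hypothesis: equal means
f_stat = obs.var(ddof=1) / pred.var(ddof=1)
print("F-test p:", stats.f.sf(f_stat, len(obs) - 1, len(pred) - 1))  # null: equal variances
```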

Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey

Procedia PDF Downloads 114
2517 Antimicrobial, Antioxidant and Cytotoxicity Properties of Some Selected Wild Edible Fruits Used Traditionally as a Source of Food

Authors: Thilivhali Emmanuel Tshikalange, Darky Cheron Modishane, Frederick Tawi Tabit

Abstract:

The fruit pulp extracts of twelve selected ethnobotanical wild edible fruits from Mutale local municipality in Venda (Limpopo Province, South Africa) were investigated for their antimicrobial, antioxidant, and cytotoxicity activities. Methanol extracts were prepared and tested against six micro-organisms (Salmonella typhi, Streptococcus pyogenes, Bacillus cereus, Klebsiella pneumoniae, Prevotella intermedia, and Candida albicans). The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) were determined using the micro-dilution method, while antioxidant activity was assessed with the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method. Of the 12 extracts tested, Adansonia digitata, Berchemia discolor, Manilkara mochisia, Xanthocercis zambesiaca, Landolphia kirkii, and Garcinia livingstonei showed antimicrobial activity, with MIC values ranging from 0.4 to 12.5 mg/ml. Gram-negative bacteria were more resistant to the extracts than Gram-positive bacteria. Antioxidant activity was detected only in the Adansonia digitata extract, with an IC50 (substrate concentration producing a 50% reduction) of 16.18 µg/ml. The cytotoxicity of the extracts that showed antimicrobial and antioxidant activities was also determined. All plant extracts tested were non-toxic against human kidney cells (HEK293), with IC50 values of >400 µg/ml. The results presented in this study provide support for some traditional uses of wild edible fruits.

Keywords: antimicrobial, antioxidant, cytotoxicity, ethnobotanical, fruits

Procedia PDF Downloads 361
2516 Medical and Surgical Nursing Care

Authors: Nassim Salmi

Abstract:

Postoperative mobilization is an important part of fundamental care. Increased mobilization has a positive effect on recovery, but immobilization is still a challenge in postoperative care. Aims: To report how the establishment of a national nursing database was used to measure postoperative mobilization in patients undergoing surgery for ovarian cancer. Mobilization was defined as at least 3 hours out of bed on postoperative day 1, with the goal of achieving this in 60% of patients. Clinical nurses performed the data entry for 4,400 patients with ovarian cancer. Findings: 46.7% of patients met the goal for mobilization on the first postoperative day, but variations in the duration and type of mobilization were observed. Of those mobilized, 51.8% had been walking in the hallway. A national nursing database creates opportunities to optimize fundamental care. By comparing nursing data with oncological, surgical, and pathology data, it became possible to study mobilization in relation to cancer stage, comorbidity, treatment, and extent of surgery.

Keywords: postoperative care, gynecology, nursing documentation, database

Procedia PDF Downloads 83
2515 Indian Women’s Inner-World and Female Protest in Githa Hariharan's Novel ‘The Thousand Faces of Night’

Authors: Hanaa Sameen Ameen Bajilan

Abstract:

Gender statuses are inherently unequal; it is difficult to establish equality between men and women in light of the traditional inequalities across the world. This research focuses on the similarities and differences among women from different generations and different educational backgrounds, and highlights the conflicting experiences of the characters in Githa Hariharan's novel ‘The Thousand Faces of Night’. The purpose is to show how women suffer and are humiliated in a male-dominated society. The paper depicts how women in India grapple with male domination and aggressiveness, as well as the cultural, social, and religious controls of the society they live in. The paper also explores the importance of knowledge as a powerful component that produces positive effects at the level of desire. The paper is based on the theories of Simone de Beauvoir, Pierre Bourdieu, Edward Said, René Descartes, and Amy Bhatt. Finally, the paper emphasizes survival against hegemonic regimes and Indian women's hope for a better life.

Keywords: equality, gender, Githa Hariharan, humiliation

Procedia PDF Downloads 122
2514 Synthesis of Novel Uracil Non-nucleosides Analogues of the Reverse Transcriptase Inhibitors Emivirine and TNK-651

Authors: Nasser R. El-Brollosy, Roberta Loddo

Abstract:

6-Benzyl-1-(ethoxymethyl)-5-isopropyluracil (emivirine) and its corresponding 1-benzyloxymethyl analogue (TNK-651) show high activity against HIV-1. The present study describes the synthesis of novel emivirine analogues by reaction of chloromethyl ethyl ether with uracils bearing 5-ethyl/isopropyl and 6-(3,5-dimethoxybenzyl) substituents. A series of new TNK-651 analogues substituted at N-1 with a phenoxyethoxymethyl moiety was prepared by treating the corresponding uracils with bis(phenoxyethoxy)methane. The newly synthesized non-nucleosides were tested for biological activity against wild-type HIV-1 IIIB as well as the resistant strains N119 (Y181C), A17 (K103N + Y181C), and the triple mutant EFVR (K103R + V179D + P225H) in MT-4 cells. Some of the tested compounds showed good activities; among them, 6-(3,5-dimethylbenzyl)-5-ethyl-1-[2-(phenoxyethyl)oxymethyl]uracil showed inhibitory potency higher than that of emivirine against both wild-type HIV-1 and the tested mutant strains.

Keywords: Emivirine, HIV, non-nucleoside reverse transcriptase, uracils

Procedia PDF Downloads 233
2513 Using Deep Learning in Lyme Disease Diagnosis

Authors: Teja Koduru

Abstract:

Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease, is therefore crucial to early diagnosis and treatment. In this study, we aim to utilize deep learning frameworks, including TensorFlow and Keras, to create deep convolutional neural networks (DCNN) that detect acute Lyme disease from images of the erythema migrans rash. This study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images as EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and normal skin, giving a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non-EM. Finally, the model was converted into a web and mobile application to allow rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and could lead to a lower mortality rate from Lyme disease.
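
The abstract does not publish the network architecture, so the Keras sketch below is only illustrative of a binary EM / non-EM classifier of this kind; every hyperparameter (input size, layer widths, epochs) is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 255),                       # normalize pixel values
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),             # P(image shows an EM rash)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Datasets would come from a labeled image directory, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory("em_images/train",
#                                                        image_size=(224, 224))
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```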

Keywords: Lyme, untreated Lyme, erythema migrans rash, EM rash

Procedia PDF Downloads 204
2512 A Comparative Analysis on QRS Peak Detection Using BIOPAC and MATLAB Software

Authors: Chandra Mukherjee

Abstract:

This paper presents work in the field of ECG signal analysis on the MATLAB 7.1 platform. An accurate and simple ECG feature extraction algorithm is presented, and the developed algorithm is validated using BIOPAC software. To detect the QRS peak, the ECG signal is processed through the following stages: first derivative, second derivative, and squaring of the second derivative. The efficiency of the developed algorithm is tested on ECG samples from different databases and on real-time ECG signals acquired with a BIOPAC system. First, a threshold value is specified for each lead; the samples above that value are marked, and the points in the original signal where these marked samples show a change of slope are spotted as R-peaks. Changes of slope on the left and right sides of the R-peak are identified as the Q and S peaks, respectively. The built-in detection algorithm of the BIOPAC software is then run on the same samples, and the two outputs are compared. ECG baseline modulation correction is performed after the characteristic points have been detected. The efficiency of the algorithm is assessed using validation parameters such as sensitivity and positive predictivity, for which satisfactory values were obtained.
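
A minimal Python sketch of the derivative-and-threshold pipeline described above, assuming a single-lead ECG as a NumPy array; the threshold ratio and the slope-reversal test are illustrative stand-ins for the paper's MATLAB implementation.

```python
import numpy as np

def detect_r_peaks(ecg, threshold_ratio=0.6):
    ecg = np.asarray(ecg, dtype=float)
    d1 = np.diff(ecg)                 # first derivative
    d2 = np.diff(d1)                  # second derivative
    feature = d2 ** 2                 # squaring emphasizes the steep QRS slopes
    threshold = threshold_ratio * feature.max()
    marked = np.where(feature > threshold)[0]
    peaks = []
    for i in marked + 1:              # +1 compensates the offset of the two diffs
        # A slope reversal in the original signal at a marked sample is an R-peak.
        if 0 < i < len(ecg) - 1 and ecg[i - 1] < ecg[i] > ecg[i + 1]:
            peaks.append(int(i))
    return peaks
```

Q and S peaks would then be located as the nearest slope reversals to the left and right of each returned R-peak index.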

Keywords: first derivative, variable threshold, slope reversal, baseline modulation correction

Procedia PDF Downloads 379
2511 Jan’s Life-History: Changing Faces of Managerial Masculinities and Consequences for Health

Authors: Susanne Gustafsson

Abstract:

Life-history research is an extraordinarily fruitful method for social analysis, and for gendered health analysis in particular. Its potential is illustrated through a case study drawn from a Swedish project. It reveals an old type of masculinity that faces difficulties when carrying out two sets of demands simultaneously, as a worker/manager and as a father/husband. The paper illuminates the historical transformation of masculinity and its consequences for health. We draw on the idea of the “changing faces of masculinity” to explore the dynamism and complexity of gendered health. An empirical case is used for its illustrative abilities. Jan, a middle-level manager and father employed in the energy sector in urban Sweden, is the subject of this paper. Jan’s story is one of 32 semi-structured interviews included in an extended study focusing on well-being at work. The results reveal a face of masculinity conceived of in middle-level management as tacitly linked to the neoliberal doctrine. Over a couple of decades, the idea of “flexibility” was turned into a valuable characteristic that everyone was supposed to strive for. This resulted in increased workloads. Quite a few employees, and managers in particular, find themselves working both day and night. This may explain why not having enough time to spend with children and family members is a recurring theme in the data. Can this way of doing gender be linked to masculinity and health? The first author’s research has revealed that the use of gender in health science is not sufficiently or critically questioned. This lack of critical questioning is a serious problem, especially since ways of doing gender affect health. We suggest that gender reproduction and gender transformation are interconnected, regardless of how they affect health. They are recognized as two sides of the same phenomenon, and minor movements in one direction or the other become crucial for understanding their relation to health. More or less at the same time as Jan’s masculinity was reproduced in response to workplace practices, Jan’s family position was transformed, not totally but by a degree or two, and these degrees became significant for the family’s health and well-being. By moving back and forth between varied events in Jan’s biographical history and his sociohistorical life span, it becomes possible to show that in a time of gender transformations, power relations can be renegotiated, with consequences for health.

Keywords: changing faces of masculinity, gendered health, life-history research method, subverter

Procedia PDF Downloads 90
2510 Creating Database and Building 3D Geological Models: A Case Study on Bac Ai Pumped Storage Hydropower Project

Authors: Nguyen Chi Quang, Nguyen Duong Tri Nguyen

Abstract:

This article is a first step toward researching and outlining the structure of a geotechnical database for the geological survey of a power project; in this report, the database was created for the Bac Ai pumped storage hydropower project. To provide a method of organizing and storing geological and topographic survey data and experimental results in a spatial database, the RockWorks software is used, bringing optimal efficiency to the process of exploiting, using, and analyzing data in the service of design work in power engineering consulting. Three-dimensional (3D) geotechnical models are created from the survey data, covering stratigraphy, lithology, porosity, etc. The resulting 3D geotechnical model for the Bac Ai pumped storage hydropower project comprises six closely stacked stratigraphic formations built by the Horizons method, whereas the modeling of engineering geological parameters is performed by geostatistical methods. Accuracy and reliability are assessed through error statistics, empirical evaluation, and expert methods. The three-dimensional model allows better visualization of volumetric calculations, excavation and backfilling of the lake area, tunneling of power pipelines, and calculation of on-site construction material reserves. In general, the application of engineering geological modeling makes the design work more intuitive and comprehensive, helping construction designers better identify and offer the most optimal design solutions for the project. The database remains updated and synchronized, and enables 3D modeling of geological and topographic data to be integrated with the design data according to building information modeling. This also serves as the base platform for BIM & GIS integration.

Keywords: database, engineering geology, 3D Model, RockWorks, Bac Ai pumped storage hydropower project

Procedia PDF Downloads 134
2509 Development of an Asset Database to Enhance the Circular Business Models for the European Solar Industry: A Design Science Research Approach

Authors: Ässia Boukhatmi, Roger Nyffenegger

Abstract:

The expansion of solar energy as a means to address the climate crisis is undisputed, but the increasing number of new photovoltaic (PV) modules being put on the market is simultaneously leading to increased challenges in terms of managing the growing waste stream. Many of the discarded modules are still fully functional but are often damaged by improper handling after disassembly or not properly tested to be considered for a second life. In addition, the collection rate for dismantled PV modules in several European countries is only a fraction of previous projections, partly due to the increased number of illegal exports. The underlying problem for those market imperfections is an insufficient data exchange between the different actors along the PV value chain, as well as the limited traceability of PV panels during their lifetime. As part of the Horizon 2020 project CIRCUSOL, an asset database prototype was developed to tackle the described problems. In an iterative process applying the design science research methodology, different business models, as well as the technical implementation of the database, were established and evaluated. To explore the requirements of different stakeholders for the development of the database, surveys and in-depth interviews were conducted with various representatives of the solar industry. The proposed database prototype maps the entire value chain of PV modules, beginning with the digital product passport, which provides information about materials and components contained in every module. Product-related information can then be expanded with performance data of existing installations. This information forms the basis for the application of data analysis methods to forecast the appropriate end-of-life strategy, as well as the circular economy potential of PV modules, already before they arrive at the recycling facility. The database prototype could already be enriched with data from different data sources along the value chain. From a business model perspective, the database offers opportunities both in the area of reuse as well as with regard to the certification of sustainable modules. Here, participating actors have the opportunity to differentiate their business and exploit new revenue streams. Future research can apply this approach to further industry and product sectors, validate the database prototype in a practical context, and can serve as a basis for standardization efforts to strengthen the circular economy.
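
As an illustration of the digital-product-passport concept mentioned above, here is a minimal sketch of a passport record as such a database might store it; the field names are hypothetical, not the CIRCUSOL schema.

```python
from dataclasses import dataclass, field

@dataclass
class ProductPassport:
    serial_number: str
    manufacturer: str
    materials: dict            # e.g. {"silicon_kg": 0.6, "silver_g": 6.5}
    nominal_power_w: float
    performance_log: list = field(default_factory=list)  # (date, kWh) pairs

    def relative_yield(self):
        # Crude end-of-life indicator: latest yield relative to the first,
        # a stand-in for the forecasting analytics described in the abstract.
        if len(self.performance_log) < 2:
            return None
        return self.performance_log[-1][1] / self.performance_log[0][1]
```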

Keywords: business model, circular economy, database, design science research, solar industry

Procedia PDF Downloads 80
2508 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System

Authors: J. K. Adedeji, M. O. Oyekanmi

Abstract:

This paper critically examines the use of machine learning procedures in curbing unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need for a system that incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model is built on the OpenCV library, which relies on certain physiological features; a Raspberry Pi 3 module runs the OpenCV pipeline, which extracts the detected faces from the camera and stores them in the datasets directory. The model is trained with a 50-epoch run on the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in OpenCV. The training algorithm used by the neural network is backpropagation, coded in Python, with 200 epoch runs to identify specific resemblances in the exclusive-OR (XOR) output neurons. The research confirmed that physiological parameters are more effective measures for curbing identity-related crimes.
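
A condensed sketch of the OpenCV pipeline the abstract outlines (Haar cascade face detection plus the LBPH recognizer); the detector parameters and label handling are illustrative, and cv2.face requires the opencv-contrib-python package.

```python
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def collect_faces(gray_frame):
    # Crop every detected face region from a grayscale camera frame.
    return [gray_frame[y:y + h, x:x + w]
            for (x, y, w, h) in detector.detectMultiScale(gray_frame, 1.3, 5)]

# Training: face crops gathered into the datasets directory, one integer
# label per enrolled person, e.g.:
# recognizer.train(face_images, np.array(labels))
# Recognition on a new crop (lower confidence means a closer match):
# label, confidence = recognizer.predict(face_crop)
```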

Keywords: biometric characters, facial recognition, neural network, OpenCV

Procedia PDF Downloads 228
2507 Investigating Real Ship Accidents with Descriptive Analysis in Turkey

Authors: İsmail Karaca, Ömer Söner

Abstract:

The use of advanced methods has been increasing day by day in the maritime sector, one of the sectors least affected by the COVID-19 pandemic. The aim is to minimize accidents, in particular by using advanced methods in the investigation of marine accidents. This research conducted an exploratory statistical analysis of particular ship accidents in the database of the Transport Safety Investigation Center of Turkey. 46 ship accidents, which occurred between 2010 and 2018, were selected from the database. In addition to the availability of a reliable and comprehensive database, taking advantage of robust statistical models for the investigation is critical to improving the safety of ships. Thus, descriptive analysis was used in the research to identify causes and conditional factors related to different types of ship accidents. The research outcomes underline the fact that environmental factors and the day/night ratio have a great influence on ship safety.
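
A sketch of the kind of descriptive breakdown used, assuming the accident records were exported to CSV; the file name and column names below are hypothetical.

```python
import pandas as pd

df = pd.read_csv("ship_accidents_2010_2018.csv")    # hypothetical export

print(df["accident_type"].value_counts())           # frequency by accident type
print(df.groupby("time_of_day").size())             # day vs. night split
print(pd.crosstab(df["weather"], df["accident_type"],
                  normalize="index"))               # environmental factors per type
```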

Keywords: descriptive analysis, maritime industry, maritime safety, ship accident statistics

Procedia PDF Downloads 117
2506 Comparison between RILM, JSTOR, and WorldCat Used to Search for Secondary Literature

Authors: Stacy Jarvis

Abstract:

Databases such as JSTOR, RILM, and WorldCat have been the main sources and stores of literature in the music sphere. RILM is a bibliographic database of over 2.6 million citations to writings about music from over 70 countries, produced by the Research Institute for the Study of Music at the University of Buffalo. JSTOR is an e-library of academic journals, books, and primary sources; it helps scholars find, use, and build upon a vast range of literature through a powerful teaching and research platform. Another database, WorldCat, is the world's largest library catalogue, assisting scholars in finding library materials online. An evaluation of these databases in the music sphere is conducted by looking into their descriptions and intended uses and finding similarities and differences among them. The comparison shows that, although they share the goal of providing and storing literature, the databases serve different purposes; since each focuses on different parts of the literature, their intended uses are evaluated in the section on description, scope, and intended uses. These areas are crucial to the research, as they address the functional and literature differences among the three databases. The databases also differ in quantitative potential, determined by the year each began collecting literature and by the number of articles, periodicals, albums, conference proceedings, music scores, dissertations, digital media, essay collections, journal articles, monographs, online resources, reviews, and reference materials each contains. To compare the delivery of services to users, the importance of the databases in identifying literature on different topics is addressed in its own section. Although all three databases are used in research, each has advantages and disadvantages, which are addressed in the corresponding sections; these will be significant in determining which of the three is best, and in showing how the shortcomings of one database can be addressed by using two databases together, as discussed in the section on combining RILM and JSTOR. All this information revolves around the idea that a huge amount of quantitative and qualitative data on music and digital content can be found in the presented databases; however, each database has a different construction and different material features, contributing to musical scholarship in its own way.

Keywords: RILM, JSTOR, WorldCat, database, literature, research

Procedia PDF Downloads 60
2505 Evaluation of Real Time PCR Methods for Food Safety

Authors: Ergun Sakalar, Kubra Bilgic

Abstract:

In recent decades, real-time PCR has become a reliable tool preferred in many laboratories for pathogen detection. This technique allows target amplification to be monitored via fluorescent molecules and also admits quantitative analysis by converting the outcomes of thermal cycling into digital data. The sensitivity and traceability of real-time PCR are based on measuring fluorescence that appears only when the fluorescent reporter dye is bound to the specific target DNA. The fluorescent reporter systems developed for this purpose are divided into two groups. The first group consists of intercalating fluorescent dyes, such as SYBR Green and EvaGreen, which bind to double-stranded DNA. The second group includes fluorophore-labeled oligonucleotide probes, which are separated into three subgroups according to their mechanism of action: primer-probes such as Cyclicons, Angler®, Amplifluor®, LUX™, and Scorpions; hydrolysis probes such as TaqMan and the Snake assay; and hybridization probes such as Molecular Beacons, Hybprobe/FRET, HyBeacon™, MGB-Eclipse, ResonSense®, Yin-Yang, and MGB-Pleiades. In addition, nucleic acid analogues are employed with fluorescently labeled probes to increase probe affinity for the target site. Consequently, researchers choose among the abundant real-time PCR detection chemistries according to the field of application, the mechanism of action, the advantages, and the suitability of the primer/probe structures.

Keywords: fluorescent dye, food safety, molecular probes, nucleic acid analogues

Procedia PDF Downloads 218
2504 An Optimized Association Rule Mining Algorithm

Authors: Archana Singh, Jyoti Agarwal, Ajay Rana

Abstract:

Data mining is an efficient technology for discovering patterns in large databases. Association rule mining (ARM) techniques are used to find correlations between the various itemsets in a database, and these correlations are used in decision making and pattern analysis. In recent years, the problem of finding association rules in large datasets has been addressed by many researchers. Various research papers on ARM were first studied and analyzed to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans. The DIC (dynamic itemset counting) algorithm needs fewer database scans but uses a complex lattice data structure. The main focus of this paper is to propose a new optimized algorithm (the Friendly Algorithm) and compare its performance with existing algorithms. A dataset is used to find frequent itemsets and association rules with the help of the existing and proposed algorithms, and it has been observed that the Friendly Algorithm finds all the frequent itemsets and essential association rules in fewer database scans than the existing algorithms. The proposed algorithm uses an optimized data structure: a graph represented by an adjacency matrix.
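
To make the graph/adjacency-matrix idea concrete: frequent items become vertices and co-occurrence counts become edge weights, so all frequent 2-itemsets fall out of a single database scan. This sketch illustrates only that data structure, not the authors' full Friendly Algorithm; the transactions are toy data.

```python
import numpy as np
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
items = sorted(set().union(*transactions))
index = {item: i for i, item in enumerate(items)}

adj = np.zeros((len(items), len(items)), dtype=int)   # adjacency (co-occurrence) matrix
for t in transactions:                                # one database scan
    for x, y in combinations(sorted(t), 2):
        adj[index[x], index[y]] += 1

min_support = 2
frequent_pairs = [(items[i], items[j])
                  for i in range(len(items))
                  for j in range(i + 1, len(items))
                  if adj[i, j] >= min_support]
print(frequent_pairs)   # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```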

Keywords: association rules, data mining, dynamic item set counting, FP-growth, friendly algorithm, graph

Procedia PDF Downloads 389
2503 Computational Investigation of V599 Mutations of BRAF Protein and Its Control over the Therapeutic Outcome under the Malignant Condition

Authors: Mayank, Navneet Kaur, Narinder Singh

Abstract:

The V599 mutations in the BRAF protein are extremely oncogenic and responsible for countless malignant conditions. Along with the wild type, V599E, V599D, and V599R are the important mutated variants of the BRAF protein. BRAF-inhibitory anticancer agents are continuously being developed, and sorafenib is a BRAF inhibitor in clinical use. The crystal structures of sorafenib bound to the wild type and to the V599 mutant are known and show a similar interaction pattern in both cases. The mutated 599th residue is, in both cases, found not to interact directly with the co-crystallized sorafenib molecule. However, the IC50 value of sorafenib differs markedly between the two: 22 nmol/L for the wild-type and 38 nmol/L for the V599E protein. Molecular docking and MM-GBSA binding energy results also revealed a significant difference in the binding pattern of sorafenib in the two cases. Therefore, to explore the role of the distinctively situated 599th residue, we conducted comprehensive computational studies. Molecular dynamics simulation, residue interaction network (RIN) analysis, and residue correlation studies revealed the influence of the 599th residue on the therapeutic outcome and overall dynamics of the BRAF protein. Thus, although the 599th residue lies far from the ligand-binding cavity of BRAF, it still exerts exceptional control over the overall functional outcome of the protein. The insight obtained here may prove extremely important as a guide in designing ideal BRAF-inhibitory anticancer molecules.

Keywords: BRAF, oncogenic, sorafenib, computational studies

Procedia PDF Downloads 93
2502 Standard Languages for Creating a Database to Display Financial Statements on a Web Application

Authors: Vladimir Simovic, Matija Varga, Predrag Oreski

Abstract:

XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements in web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries recognize the role of the XBRL language in financial reporting and the benefits the format provides in the collection, analysis, preparation, publication, and exchange of data (information). Here we present the advantages and opportunities a company may gain by using the XBRL format for business reporting. The paper also presents XBRL alongside the other languages used for creating the database, such as XML and XHTML. The role of the AJAX model and its technologies in the exchange of financial data between the web client and the web server is explained in detail, and the basic network layers involved in data exchange via the web are described.
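
A toy sketch of emitting an XBRL-like instance for a single balance-sheet fact, simplified far below what the XBRL specification requires (real instances also declare schemas, units, and entity/period details); the element names are illustrative, not drawn from a real taxonomy.

```python
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"
ET.register_namespace("xbrli", XBRLI)

root = ET.Element(f"{{{XBRLI}}}xbrl")
ctx = ET.SubElement(root, f"{{{XBRLI}}}context", id="FY2015")
ET.SubElement(ctx, f"{{{XBRLI}}}period")            # entity/period left empty here
fact = ET.SubElement(root, "TotalAssets",
                     contextRef="FY2015", unitRef="EUR", decimals="0")
fact.text = "1250000"

print(ET.tostring(root, encoding="unicode"))        # an XHTML page (e.g. via AJAX)
                                                    # would fetch and render this
```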

Keywords: XHTML, XBRL, XML, JavaScript, AJAX technology, data exchange

Procedia PDF Downloads 369
2501 Morphological Features Fusion for Identifying INBREAST-Database Masses Using Neural Networks and Support Vector Machines

Authors: Nadia el Atlas, Mohammed el Aroussi, Mohammed Wahbi

Abstract:

In this paper, a novel technique for mass characterization based on robust feature fusion is presented. The proposed method consists of three main stages: (a) the first phase involves segmenting the masses using edge information; (b) the second phase calculates and fuses the most relevant morphological features; (c) the last phase is the classification step, which allows us to classify the images into benign and malignant masses. In this step we implemented Support Vector Machines (SVM) and Artificial Neural Networks (ANN), which were evaluated with the following performance criteria: confusion matrix, accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curve, and error histogram. The effectiveness of this new approach was evaluated on a recently developed database, the INBREAST database. The fusion of the most appropriate morphological features provided very good results. The SVM achieved an accuracy of 64.3%, whereas the ANN classifier gave better results, with an accuracy of 97.5%.
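
A sketch of the classification-and-comparison step, assuming a feature matrix X of fused morphological descriptors (area, perimeter, circularity, ...) and labels y (0 = benign, 1 = malignant); scikit-learn and synthetic placeholder data stand in for the authors' tools and the INBREAST features.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.datasets import make_classification

# Placeholder for the fused morphological feature matrix.
X, y = make_classification(n_samples=116, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))   # sensitivity/specificity follow from this
```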

Keywords: breast cancer, mammography, CAD system, features, fusion

Procedia PDF Downloads 567
2500 Using Priority Order of Basic Features for Circumscribed Masses Detection in Mammograms

Authors: Minh Dong Le, Viet Dung Nguyen, Do Huu Viet, Nguyen Huu Tu

Abstract:

In this paper, we present a new method for detecting circumscribed masses in mammograms. Our method is evaluated on 23 mammographic images of circumscribed masses and 20 normal mammograms from the public Mini-MIAS database. The method is quite promising, with a sensitivity (SE) of 95% at only about 1 false positive per image (FPpI). To achieve these results, the method proceeds as follows. First, the input images are preprocessed to enhance the key information of circumscribed masses. Next, basic features of abnormal regions in the training database are calculated and evaluated statistically. Then, the mammograms in the testing database are divided into equal blocks, for which the corresponding features are calculated. Finally, the priority order of the basic features is used to classify each block as an abnormal or a normal region.
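
A sketch of the block-wise scan described above: the image is tiled into equal blocks, per-block features are computed, and features are checked in priority order so cheaper tests prune blocks early. The feature functions and thresholds below are illustrative, not the statistically derived values from the paper.

```python
import numpy as np

def classify_blocks(image, block=32, checks=None):
    # checks: ordered (feature_fn, threshold) pairs, highest priority first.
    checks = checks or [(np.mean, 140), (np.std, 25)]   # illustrative values
    suspicious = []
    h, w = image.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = image[r:r + block, c:c + block]
            # all() short-circuits, so low-priority features are only computed
            # for tiles that pass the higher-priority tests.
            if all(fn(tile) >= th for fn, th in checks):
                suspicious.append((r, c))               # candidate mass block
    return suspicious
```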

Keywords: mammograms, circumscribed masses, evaluated statistically, priority order of basic features

Procedia PDF Downloads 304
2499 Ferromagnetic Potts Models with Multi Site Interaction

Authors: Nir Schreiber, Reuven Cohen, Simi Haber

Abstract:

The Potts model has been widely explored in the literature for the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside at the corners of an elementary square. Each spin can take an integer value 1, 2, ..., q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively. The nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zeroth-order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking into account higher-order site contributions, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite-size behavior.
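
For concreteness, a minimal Metropolis sketch of the four-site (plaquette) Potts model, where the energy counts monochromatic elementary squares with periodic boundaries. This only encodes the model's definition at an assumed inverse temperature; the paper's actual results rely on Wang-Landau (entropic) sampling, which is not reproduced here.

```python
import numpy as np

L, q, beta = 16, 3, 1.0
rng = np.random.default_rng(0)
spins = rng.integers(0, q, size=(L, L))

def local_energy(s, i, j):
    # -1 for each monochromatic elementary square touching site (i, j).
    e = 0
    for di, dj in [(0, 0), (-1, 0), (0, -1), (-1, -1)]:   # the 4 plaquettes at (i, j)
        a, b = (i + di) % L, (j + dj) % L
        corners = {s[a, b], s[(a + 1) % L, b],
                   s[a, (b + 1) % L], s[(a + 1) % L, (b + 1) % L]}
        e -= len(corners) == 1
    return e

for _ in range(20_000):                       # Metropolis single-spin updates
    i, j = rng.integers(0, L, size=2)
    old = spins[i, j]
    e_old = local_energy(spins, i, j)
    spins[i, j] = rng.integers(0, q)
    d_e = local_energy(spins, i, j) - e_old
    if d_e > 0 and rng.random() >= np.exp(-beta * d_e):
        spins[i, j] = old                     # reject the move
```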

Keywords: entropic sampling, lattice animals, phase transitions, Potts model

Procedia PDF Downloads 136
2498 3D-Vehicle Associated Research Fields for Smart City via Semantic Search Approach

Authors: Haluk Eren, Mucahit Karaduman

Abstract:

This paper presents 15-year trends in scientific studies, based on a scientific database search for the words '3D' and 'vehicle'. The two words were selected to find their associated publications in the IEEE scholarly database. Each keyword was entered individually for the years 2002, 2012, and 2016 to identify the subjects preferred by researchers in those years. After searching and listing, we classified closely related research fields. Three years (2002, 2012, and 2016) were investigated to trace progress over the specified time intervals: the first interval, 2002-2012, is taken as the period of initial progress, and the second, 2012-2016, as the period of fast development. We found very interesting and useful results for understanding scholars' research field preferences over the decade. This information will be highly valuable for smart city research involving 3D- and vehicle-related issues.

Keywords: vehicle, three-dimensional, smart city, scholarly search, semantic

Procedia PDF Downloads 294
2497 Utilising an Online Data Collection Platform for the Development of a Community Engagement Database: A Case Study on Building Inter-Institutional Partnerships at UWC

Authors: P. Daniels, T. Adonis, P. September-Brown, R. Comalie

Abstract:

The community engagement unit at the University of the Western Cape was tasked with establishing a community engagement database. The database would store information on all community engagement projects related to the university. The wealth of knowledge obtained from the various disciplines would be used to facilitate interdisciplinary collaboration within the university, as well as community-university partnership opportunities. The purpose of this qualitative study was to explore electronic data collection through the development of a database. Two types of electronic data collection platforms were used, namely an online questionnaire and email. The semi-structured questionnaire was used to collect data on community engagement projects from different faculties and departments at the university. There are many benefits to using an electronic data collection platform, such as reduced costs and time, ease of reaching large numbers of potential respondents, and the possibility of providing anonymity to participants. Despite all the advantages of using the electronic platform, there were as many challenges, as depicted in our findings. The findings suggest that certain barriers existed in using an electronic platform for data collection, even in an academic environment where knowledge and resources were in abundance. One of the challenges experienced was the lack of dissemination of information via email to staff within faculties. The online software used for the questionnaire had its own limitations, such as the questionnaire only being accessible from the same electronic device. In a few cases, academics completed the questionnaire only after a telephonic prompt or a face-to-face meeting, which raises the question: is higher education in South Africa ready to embrace electronic platforms for data collection?

Keywords: community engagement, database, data collection, electronic platform, electronic tools, knowledge sharing, university

Procedia PDF Downloads 235
2496 Optimizing Availability of Marine Knowledge Repository with Cloud-Based Framework

Authors: Ahmad S. Mohd Noor, Emma A. Sirajudin, Nur F. Mat Zain

Abstract:

Reliability is an important property for a knowledge repository system. The National Marine Bioinformatics System, or NABTICS, is a marine knowledge repository portal aimed at providing a baseline for marine biodiversity and a tool for researchers and developers. It is intended to be a large and growing online database as well as a metadata system for inputs of research analyses. The trend in present-day large distributed systems, such as cloud computing, is the delivery of computing as a service rather than as a product. The goal of this research is to give NABTICS greater availability by integrating it with cloud-based Neighbor Replication and Failure Recovery (NRFR). This can be achieved by deploying NABTICS in a distributed environment. As a result, users experience minimal downtime should a server fail, and the online database application can consequently be considered highly available.
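
A toy sketch of neighbor replication with read failover, conveying only the availability idea: each write also lands on the next node in the ring, so a read survives a single server failure. NRFR's actual protocol details are not published in this abstract, and all names below are hypothetical.

```python
class Node:
    def __init__(self, name):
        self.name, self.store, self.alive = name, {}, True

class ReplicatedDB:
    def __init__(self, nodes):
        self.nodes = nodes

    def write(self, key, value):
        primary = hash(key) % len(self.nodes)
        neighbor = (primary + 1) % len(self.nodes)   # replicate to the ring neighbor
        for idx in (primary, neighbor):
            self.nodes[idx].store[key] = value

    def read(self, key):
        primary = hash(key) % len(self.nodes)
        for idx in (primary, (primary + 1) % len(self.nodes)):
            if self.nodes[idx].alive:                # fail over if primary is down
                return self.nodes[idx].store.get(key)
        raise RuntimeError("both replicas down")

db = ReplicatedDB([Node("n0"), Node("n1"), Node("n2")])
db.write("species:142", {"name": "Epinephelus coioides"})
db.nodes[hash("species:142") % 3].alive = False      # simulate a server failure
print(db.read("species:142"))                        # still served by the neighbor
```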

Keywords: cloud, availability, distributed system, marine repository, database replication

Procedia PDF Downloads 443
2495 Designing a Model for Preparing Reports on the Automatic Earned Value Management Progress by the Integration of Primavera P6, SQL Database, and Power BI: A Case Study of a Six-Storey Concrete Building in Mashhad, Iran

Authors: Hamed Zolfaghari, Mojtaba Kord

Abstract:

Project planners and controllers are frequently faced with the challenge of inadequate software for preparing automatic project progress reports based on actual project information updates. They usually build dashboards in Microsoft Excel, which are local and not available online. Another shortcoming is that Excel is not linked to planning software such as Microsoft Project, which in turn lacks the database required for data storage. This study aimed to propose a model for preparing automatic online project progress reports based on actual project information updates by integrating Primavera P6, a SQL database, and Power BI for a construction project. The designed model is applicable for project planners and controllers, enabling them to prepare project reports automatically and immediately after updating the project schedule with actual information. To develop the model, the data were entered into P6, and the information was stored in the SQL database. The proposed model can prepare a wide range of reports, such as earned value management, HR, financial, physical, and risk reports, automatically in the Power BI application. Furthermore, the reports can be published and shared online.
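
A sketch of the earned value computation that the report layer performs, assuming the P6 activities have been exported to a SQL table; SQLite stands in for the actual database, and the column names (budget_cost, percent_complete as fractions, etc.) are hypothetical.

```python
import sqlite3

con = sqlite3.connect("project.db")            # stand-in for the SQL database
row = con.execute("""
    SELECT SUM(budget_cost * planned_percent)  AS pv,   -- planned value
           SUM(budget_cost * percent_complete) AS ev,   -- earned value
           SUM(actual_cost)                    AS ac    -- actual cost
    FROM activities
""").fetchone()

pv, ev, ac = row
print("SPI =", ev / pv)            # schedule performance index
print("CPI =", ev / ac)            # cost performance index
print("SV  =", ev - pv, "CV =", ev - ac)
# Power BI would read the same table and refresh these measures automatically.
```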

Keywords: primavera P6, SQL, Power BI, EVM, integration management

Procedia PDF Downloads 66
2494 Literature Review on the Controversies and Changes in the Insanity Defense since the Wild Beast Standard in 1723 until the Federal Insanity Defense Reform Act of 1984

Authors: Jane E. Hill

Abstract:

Many variables led to changes in the insanity defense from the Wild Beast Standard of 1723 until the Federal Insanity Defense Reform Act of 1984. The insanity defense is used in criminal trials to argue that the defendant is ‘not guilty by reason of insanity’ because the individual was unable to distinguish right from wrong while breaking the law. Whether the insanity defense can be used in criminal court depends on the mental state of the defendant at the time the criminal act was committed. This leads to the question: did the defendant know right from wrong when they broke the law? In 1723, the Wild Beast Test stated that to be exempted from punishment, the individual must be totally deprived of their understanding and memory and not know what they are doing. The Wild Beast Test remained the standard in England for over seventy-five years. In 1800, James Hadfield attempted to assassinate King George III, making the attempt only because he was suffering from delusional beliefs. The jury and the judge returned a verdict of not guilty. However, to confine him legally, the Criminal Lunatics Act was enacted: individuals deemed ‘criminal lunatics’ who were given a verdict of not guilty would be taken into custody and not freed into society. In 1843, the M'Naghten test required that the individual did not know the quality or the wrongfulness of the offense at the time they committed the criminal act(s). Daniel M'Naghten was acquitted on grounds of insanity. The M'Naghten test is still a modern formulation of the insanity defense used in many courts today. The Irresistible Impulse Test was adopted in the United States in 1887. It suggested that offenders who could not control their behavior while committing a criminal act were not deterrable by the criminal sanctions in place; therefore, no purpose would be served by convicting them. Due to criticisms of the latter two tests, the federal District of Columbia Court of Appeals ruled in 1954 to adopt the ‘product test’ for insanity, drawing on Isaac Ray. The Durham rule, also known as the ‘product test’, stated that an individual is not criminally responsible if the unlawful act was the product of mental disease or defect. Therefore, two questions need to be asked and answered: (1) did the individual have a mental disease or defect at the time they broke the law? and (2) was the criminal act the product of that disease or defect? The Durham courts failed to clearly define ‘mental disease’ or ‘product’, so trial courts had difficulty applying the terms, and the controversy continued until 1972, when the Durham rule was overturned in most places. The American Law Institute therefore combined the M'Naghten test with the irresistible impulse test, and the United States Congress adopted an insanity test for the federal courts in 1984.

Keywords: insanity defense, psychology law, The Federal Insanity Defense Reform Act of 1984, The Wild Beast Standard in 1723

Procedia PDF Downloads 115
2493 Oil Contents, Mineral Compositions, and Their Correlations in Wild and Cultivated Safflower Seeds

Authors: Rahim Ada, Mustafa Harmankaya, Sadiye Ayse Celik

Abstract:

The safflower seed contains about 25-40% solvent extract and 20-33% fiber. It is well known that dietary phospholipids effectively lower serum cholesterol levels. The nutrient composition of safflower seed changes depending on region, soil, and genotype. This research used naturally selected lines (A22, A29, A30, C12, E1, F4, G8, G12, J27) and three commercial varieties (Remzibey, Dincer, Black Sun1) of safflower. The research was conducted under field conditions for two years (2009 and 2010) in a randomized complete block design with three replications under the ecological conditions of Konya, Turkey. Oil contents, mineral contents, and their correlations were determined. According to the results, oil content ranged from 22.38% to 34.26%, while the mineral contents fell within the following ranges: 1469.04-2068.07 mg kg-1 for Ca, 7.24-11.71 mg kg-1 for B, 13.29-17.41 mg kg-1 for Cu, 51.00-79.35 mg kg-1 for Fe, 3988-6638.34 mg kg-1 for K, 1418.61-2306.06 mg kg-1 for Mg, 11.37-17.76 mg kg-1 for Mn, 4172.33-7059.58 mg kg-1 for P, and 32.60-59.00 mg kg-1 for Zn. Correlation analysis, carried out separately for the commercial varieties and the wild lines, showed that in the commercial varieties a high oil content was negatively correlated with all the investigated minerals except K and Zn.

Keywords: safflower, oil, quality, mineral content

Procedia PDF Downloads 243
2492 New Approach for Constructing a Secure Biometric Database

Authors: A. Kebbeb, M. Mostefai, F. Benmerzoug, Y. Chahir

Abstract:

Multimodal biometric identification is the combination of several biometric systems. The challenge of this combination is to reduce some of the limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and protection of a multimodal biometric database dedicated to an identification system. We use a topological watermarking to hide the relation between the face image and the registered descriptors extracted from the other modalities of the same person, for more secure user identification.
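
To illustrate the general idea of hiding a link between a face image and descriptors from other modalities, here is a generic least-significant-bit (LSB) watermark sketch. Note the authors use a topological watermark, a different, structure-based embedding; this simpler variant only conveys the concept, and all names are hypothetical.

```python
import numpy as np

def embed_id(face_img, record_id, n_bits=32):
    # Write record_id into the least significant bits of the first n_bits pixels.
    flat = face_img.copy().ravel()
    bits = np.array([(record_id >> k) & 1 for k in range(n_bits)], dtype=np.uint8)
    flat[:n_bits] = (flat[:n_bits] & 0xFE) | bits
    return flat.reshape(face_img.shape)

def extract_id(face_img, n_bits=32):
    bits = face_img.ravel()[:n_bits] & 1
    return int(sum(int(b) << k for k, b in enumerate(bits)))

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stamped = embed_id(img, record_id=123456)
assert extract_id(stamped) == 123456   # descriptor record recoverable from the face
```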

Keywords: biometric databases, multimodal biometrics, security authentication, digital watermarking

Procedia PDF Downloads 346