Search results for: violation data discovery
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24726

24516 The Discovery and Application of Perspective Representation in Modern Italy

Authors: Matthias Stange

Abstract:

In the early modern period, a different image of man began to prevail in Europe. The focus was on the self-determined human being and his abilities. At first, these developments could be seen in Italian painting and architecture, which again oriented itself to the concepts and forms of antiquity, for example through the rediscovery of perspective representation by Brunelleschi and, later, of orthogonal projection by Alberti, after the ancient knowledge of optics had been forgotten in the Middle Ages. The understanding of reality in the Middle Ages was not focused on the sensually perceptible world but was determined by ecclesiastical dogmas. The empirical part of this study examines the rediscovery and development of perspective. With the paradigm of antiquity, the figure of the architect was also recognised again: the cultured man trained theoretically and practically in numerous subjects, as Vitruvius describes him. In this context, the role of the architect, the influence on the painting of the Quattrocento, and the influence on architectural representation in the Baroque period are examined. The Baroque is commonly associated with the idea of illusionistic appearance, as opposed to the tangible reality presented in the Renaissance. The study has shown that the central perspective projection developed by Filippo Brunelleschi enabled a new understanding of seeing and the dissemination of painted images. Brunelleschi's development made it possible to understand the sight of nature as a reflection of what is presented to the viewer's eye. Alberti later simplified Brunelleschi's central perspective construction for practical use in painting. In early modern Italian architecture and painting, these developments apparently supported each other. The pictorial representation of architecture initially served the development of an art form before it became established in building practice itself.

Keywords: Alberti, Brunelleschi, central perspective projection, orthogonal projection, quattrocento, baroque

Procedia PDF Downloads 55
24515 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model

Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis

Abstract:

Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested on all known transit signals with 100% confirmation. In addition, it has been successfully applied to the Kepler Objects of Interest (KOI) data and has identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
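
The core folding step can be illustrated with a minimal sketch: a light curve is folded at a trial period so that repeated transits stack up in phase. The synthetic light curve and binning scheme below are illustrative assumptions, not the authors' GPU implementation.

```python
def fold_light_curve(times, fluxes, period, n_bins=50):
    """Fold a light curve at a trial period and average flux in phase bins."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for t, f in zip(times, fluxes):
        phase = (t % period) / period            # phase in [0, 1)
        b = min(int(phase * n_bins), n_bins - 1)
        sums[b] += f
        counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Synthetic light curve: flat flux with a 1% dip recurring every 2.5 days.
period = 2.5
times = [i * 0.02 for i in range(5000)]          # 100 days, 0.02-day cadence
fluxes = [0.99 if (t % period) < 0.1 else 1.0 for t in times]

profile = fold_light_curve(times, fluxes, period)
# After folding at the correct period, the transit concentrates in the
# first phase bins while the rest stay at the baseline flux.
print(min(profile), max(profile))
```

Folding at many trial periods and scoring each folded profile (here, e.g., by the depth of the minimum) is what the GPU version parallelizes.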

Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry

Procedia PDF Downloads 186
24514 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. An accurate temporal estimate of the crime rate would be valuable in achieving this goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data, i.e., data in which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with statistical approaches, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression with special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction.
In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing particular areas within cities where data are available, the proposed approach can provide not only an accurate estimate of intensities for the time units considered but also a time-varying crime incidence pattern. Both will be helpful in allocating limited resources, either by improving the existing patrol plan using the discovered day-of-week clusters or by supporting extra resources where available.
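
The EM idea for interval-censored event times can be sketched as follows. Each event is known only to fall in a window of hour-of-week bins; the E-step spreads each event over its window in proportion to the current intensity, and the M-step re-estimates the per-bin rate. The moving-average smoothing here is a crude stand-in for the paper's penalty terms, and the event windows are invented for illustration.

```python
def em_intensity(intervals, n_bins=168, n_weeks=1, iters=50, smooth=True):
    """EM estimate of hour-of-week intensity from interval-censored events.

    intervals: list of (start_bin, end_bin) pairs, inclusive, meaning the
    event occurred somewhere within those hour-of-week bins."""
    lam = [1.0] * n_bins                       # flat initial intensity
    for _ in range(iters):
        expected = [0.0] * n_bins
        # E-step: spread each event over its interval, weighted by lam.
        for a, b in intervals:
            total = sum(lam[h] for h in range(a, b + 1))
            for h in range(a, b + 1):
                expected[h] += lam[h] / total
        # M-step: rate = expected events per week in each bin.
        lam = [e / n_weeks + 1e-9 for e in expected]
        # Crude smoothness "penalty": average each bin with its neighbours.
        if smooth:
            lam = [(lam[h - 1] + lam[h] + lam[(h + 1) % n_bins]) / 3
                   for h in range(n_bins)]
    return lam

# Ten events known only to a 6-hour window around Friday night
# (bins 120-125), plus two events early in the week.
events = [(120, 125)] * 10 + [(10, 15), (12, 17)]
lam = em_intensity(events)
print(sum(lam))  # total expected events per week matches the event count
```

The real model replaces the ad-hoc smoothing with penalties in the likelihood, but the alternation between allocating censored events and re-estimating rates is the same.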

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 44
24513 Evaluation of the Accuracy of a ‘Two Question Screening Tool’ in the Detection of Intimate Partner Violence in a Primary Healthcare Setting in South Africa

Authors: A. Saimen, E. Armstrong, C. Manitshana

Abstract:

Intimate partner violence (IPV) has been recognised as a global human rights violation. It is universally underdiagnosed, and the institution of timeous multi-faceted interventions has been noted to benefit IPV victims. Currently, the concept of using a screening tool to detect IPV has not been widely explored in a primary healthcare setting in South Africa, and it was for this reason that this study was undertaken. A systematic random sample of 1 in 8 women was taken prospectively over a period of 3 months at the OPD of a Level 1 Hospital. Participants were asked about their experience of IPV during the past 12 months. The WAST-short, a two-question tool, was used to screen patients for IPV. To verify the result of the screening, women were also asked the remaining questions from the WAST. Data were collected from 400 participants, with a response rate of 99.3%. The prevalence of IPV in the sample was 32%. The WAST-short was shown to have the following operating characteristics: sensitivity 45.2%, specificity 98%, positive predictive value 98%, negative predictive value 79%. The WAST-short lacks sufficient sensitivity and is therefore not an ideal screening tool for this setting. Improvement in the sensitivity of the WAST-short in this setting may be achieved by lowering the threshold for a positive IPV screen and by modifying the screening questions to better reflect IPV as understood by the local population.
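
The operating characteristics reported above all derive from a 2x2 table of screen results versus verified IPV status. A small sketch of the arithmetic, using hypothetical cell counts chosen to mimic the reported sensitivity and specificity (the study does not publish the raw table):

```python
def screening_metrics(tp, fp, fn, tn):
    """Operating characteristics of a screening tool from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positives among all verified cases
    specificity = tn / (tn + fp)   # true negatives among all non-cases
    ppv = tp / (tp + fp)           # chance a positive screen is a real case
    npv = tn / (tn + fn)           # chance a negative screen is a non-case
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for ~400 women at 32% prevalence (128 cases):
sens, spec, ppv, npv = screening_metrics(tp=58, fp=5, fn=70, tn=267)
print(round(sens, 3), round(spec, 3), round(ppv, 3), round(npv, 3))
```

Note how a high specificity keeps false positives rare even when sensitivity is low, which is exactly the trade-off the abstract proposes to shift by lowering the positivity threshold.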

Keywords: domestic violence, intimate partner violence, screening, screening tools

Procedia PDF Downloads 282
24512 Development of Knowledge Discovery Based Interactive Decision Support System on Web Platform for Maternal and Child Health System Strengthening

Authors: Partha Saha, Uttam Kumar Banerjee

Abstract:

Maternal and Child Healthcare (MCH) has always been regarded as one of the most important issues globally. Reduction of maternal and child mortality rates and increased coverage of healthcare services were declared targets in the Millennium Development Goals up to 2015 and thereafter became important components of the Sustainable Development Goals. Over the last decade, worldwide MCH indicators have improved but have not matched the expected levels. Progress in both maternal and child mortality rates has been monitored by several researchers. These studies indicate that fewer than 26% of low-income and middle-income countries (LMICs) were on track to achieve the targets prescribed by MDG 4. The average worldwide annual rates of reduction of the under-five mortality rate and the maternal mortality rate were 2.2% and 1.9% respectively as of 2011, whereas rates of at least 4.4% and 5.5% annually are required to achieve the targets. In spite of proven healthcare interventions for both mothers and children, these could not be scaled up to the required volume due to fragmented health systems, especially in developing and under-developed countries. In this research, a knowledge discovery based interactive Decision Support System (DSS) has been developed on a web platform to assist healthcare policy makers in developing evidence-based policies. To achieve desirable results in MCH, efficient resource planning is essential, and in most LMICs resources are a major constraint. Knowledge generated through this system would help healthcare managers develop strategic resource plans for combating issues such as wide inequity and low coverage in MCH. The system helps healthcare managers accomplish four tasks.
These are: a) comprehending region-wise conditions of variables related to MCH, b) identifying relationships among variables, c) segmenting regions based on variable status, and d) finding segment-wise key influential variables that have a major impact on healthcare indicators. The system development process was divided into three phases: i) identifying contemporary issues related to MCH services and policy making; ii) developing the system; and iii) verifying and validating the system. More than 90 variables under three categories, namely a) educational, social, and economic parameters; b) MCH interventions; and c) health system building blocks, have been included in this web-based DSS, and five separate modules have been developed. The first module is designed for analysing the current healthcare scenario. The second module helps healthcare managers understand correlations among variables. The third module reveals frequently occurring incidents along with different MCH interventions. The fourth module segments regions based on the three categories mentioned above, and in the fifth module, segment-wise key influential interventions are identified. India has been considered as the case study area in this research. Data from 601 districts of India have been used to inspect the effectiveness of the developed modules. The system has been built by implementing different statistical and data mining techniques on a web platform. Its interactive capability lets policy makers generate different scenarios before drawing any inference.

Keywords: maternal and child healthcare, decision support systems, data mining techniques, low and middle income countries

Procedia PDF Downloads 226
24511 Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity

Authors: Hoda A. Abdel Hafez

Abstract:

Mining big data represents a big challenge nowadays. Much research is concerned with mining massive amounts of data and big data streams. Mining big data faces many challenges, including scalability, speed, heterogeneity, accuracy, provenance, and privacy. In the telecommunication industry, mining big data is like mining for gold: it represents a big opportunity for maximizing revenue streams. This paper discusses the characteristics of big data (volume, variety, velocity, and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunications, and the benefits and opportunities gained from them.

Keywords: mining big data, big data, machine learning, telecommunication

Procedia PDF Downloads 372
24510 Clustering-Based Computational Workload Minimization in Ontology Matching

Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris

Abstract:

In order to build a matching pattern for each class correspondence of an ontology, it is required to specify a set of attribute correspondences across two corresponding classes by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes ontology matching computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping and thereby reduces the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets are compared on their attribute values, so that they can be judged to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Moreover, two instances that hold more similar attribute values are more likely to be matched than instances with less similar attribute values. Most of the time, similar attribute values exist in the two instances across which there is an attribute correspondence.
This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences to be applied in generating a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs used when comparing instances, as only attribute pairs whose coverage probability reaches 100% and whose similarity exceeds the specified threshold are considered as potential attributes for a matching. Using clustering reduces the number of potential element correspondences considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, all elements of each class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs need not be considered when constructing the matching pattern.
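
A minimal sketch of the K-medoids step, assuming each attribute is summarised by a small numeric feature vector (the feature choices below are invented for illustration; the paper's own statistical value features may differ):

```python
import random

def k_medoids(points, k, dist, iters=20, seed=0):
    """Simple PAM-style k-medoids: assign each point to its nearest medoid,
    then move each medoid to the member minimising total in-cluster distance."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    clusters = {}
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        new_medoids = [min(members,
                           key=lambda i: sum(dist(points[i], points[j])
                                             for j in members))
                       for members in clusters.values()]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters

# Hypothetical attribute value features (e.g. mean length, numeric ratio)
# for six attributes; two natural groups should emerge.
feats = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85),
         (0.9, 0.1), (0.8, 0.2), (0.85, 0.15)]
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
medoids, clusters = k_medoids(feats, 2, euclid)
print(sorted(len(v) for v in clusters.values()))  # two groups of three
```

Attributes in one cluster would then only be compared against attributes of the corresponding cluster in the other ontology, which is where the workload saving comes from.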

Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching

Procedia PDF Downloads 224
24509 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ

Authors: Lalita, Niladri Sarkar, Subhasis Ghosh

Abstract:

Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperature, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the signature of a strongly correlated metallic state, known as a "strange metal", attributed to non-Fermi liquid (NFL) physics. The proximity of superconductivity to LITR suggests that there may be an underlying common origin. The LITR has been attributed to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as "Planckian dissipation", a term first coined by Zaanen, with the associated inelastic scattering time τ given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant, and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. Several striking issues remain to be resolved if we wish to find, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) The universality of α ~ 1; recently, doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates. The link between Planckian dissipation and quantum criticality remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, with x = 0.16 (optimally doped) and x = 0.145 (marginally underdoped), have been used for this investigation.
It is realized that steady-state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by suppressing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc at the charge transfer gap. We have observed that the suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this to the Cu-3d9 (Cu2+) to Cu-3d10 (Cu1+) transition, known as the d9 − d10 L transition; photoexcitation renders some Cu ions in the CuO2 planes spinless, non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane in Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation, and superconductivity can be destroyed completely by introducing ≈ 2% of Cu1+ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation in underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that Tc can be varied dynamically and reversibly, so that the LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
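
As a quick numerical illustration of the Planckian bound 1/τ = αkBT/ℏ (taking α = 1; the constants below are standard CODATA values, not data from this work):

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J s
KB = 1.380649e-23        # Boltzmann constant, J / K

def planckian_tau(T, alpha=1.0):
    """Planckian scattering time from 1/tau = alpha * kB * T / hbar."""
    return HBAR / (alpha * KB * T)

# At T = 100 K the bound is a few tens of femtoseconds.
tau_100 = planckian_tau(100.0)
print(tau_100)
```

Comparing a measured scattering rate (extracted from the LITR slope) against this bound is how α is determined experimentally.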

Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal

Procedia PDF Downloads 34
24508 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and business logic time to market. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, and data persistence (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required to perform ad-hoc troubleshooting. It is therefore not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input to i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices via an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistence. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer entails more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). The system is also more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
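
The deployment flow described above can be sketched as a transformation from a declarative service definition to a per-service plan plus discovery environment variables. All field names and the endpoint scheme below are illustrative assumptions, not the actual i2kit format or CloudFormation output.

```python
def plan_deployment(services):
    """Turn a declarative microservice definition into a per-service plan
    (machine image + load balancer endpoint) and discovery env vars."""
    plan = []
    for name, spec in services.items():
        plan.append({
            "service": name,
            "image": f"{name}-ami",                    # one machine image per service
            "replicas": spec.get("replicas", 1),       # VM instances behind the LB
            "endpoint": f"http://{name}.lb.internal",  # hypothetical LB endpoint
        })
    # Service discovery: expose each endpoint as an environment variable,
    # as the abstract describes.
    env = {f"{p['service'].upper()}_URL": p["endpoint"] for p in plan}
    return plan, env

services = {"api": {"replicas": 2}, "worker": {}}
plan, env = plan_deployment(services)
print(env["API_URL"])
```

Because the plan is derived purely from the declarative input, redeploying always produces fresh images and VMs, which is the essence of the immutable infrastructure pattern.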

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 153
24507 Cross Analysis of Gender Discrimination in Print Media of Subcontinent via James Paul Gee Model

Authors: Luqman Shah

Abstract:

Gender discrimination is now a well-documented and recognised fact. However, gender is only one facet of an individual's multiple identities. The aim of this work is to investigate gender discrimination highlighted in the print media of the subcontinent, with a specific focus on Pakistan and India. In this study, the James Paul Gee model is adopted for the identification of gender discrimination. Gender discrimination is not consistent in its nature and intensity across global societies; it varies with social, geographical, and cultural background. The world has changed enormously in every aspect of life, and there are obvious changes in attitudes towards gender discrimination, prejudice, and bias, but the world still has a long way to go before women are recognised as equal to men in every sphere of life. The history of the world is full of gender-based incidents and violence. The time has come for this issue to be seriously addressed and this evil eradicated, which would lead to a more harmonious society and, consequently, to peace and prosperity. The study was carried out with a mixed-model research method. The data were extracted from the contents of five Pakistani daily English newspapers, out of a total of 23, and likewise five Indian daily English newspapers, out of 52, published in 2018-2019. Two news stories from each of these newspapers, twenty in total, were taken as the sample for this research. Content and semiotic analysis techniques were used, applying James Paul Gee's seven building tasks of language. The resources of renowned e-papers were utilised, and cases of Indian gender-based stories highlighted in Pakistani newspapers, and vice versa, were scrutinised as required for this research paper.
For the analysis of the written stretches of discourse taken from the e-papers and the processing of data for the focused problem, James Paul Gee's 'Seven Building Tasks of Language' was used. Tabulation of the findings was carried out to pinpoint the issue with certainty. The findings showed gross human rights violations on the basis of gender discrimination. The print media needs a more realistic representation of what is, not what seems to be. The study recommends equality and parity of the genders.

Keywords: gender discrimination, print media, Paul Gee model, subcontinent

Procedia PDF Downloads 186
24506 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications: A Software Testing Approach

Authors: Theertha Chandroth

Abstract:

This paper presents a comparative study of how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its own syntax and structure. The objective is to explore techniques and methodologies for validating, comparing, and integrating JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
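
A minimal sketch of one such check: normalise both formats to a common Python structure and assert equality. This assumes simple XML with no attributes and unique child tags; real test suites must also handle attributes, repeated elements, and type coercion.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(elem):
    """Flatten a simple XML element (no attributes, unique child tags)
    into a dict of tag -> text, recursing into nested elements."""
    children = list(elem)
    if not children:
        return elem.text
    return {c.tag: xml_to_dict(c) for c in children}

json_doc = '{"user": {"name": "Ada", "city": "London"}}'
xml_doc = "<user><name>Ada</name><city>London</city></user>"

json_data = json.loads(json_doc)
root = ET.fromstring(xml_doc)
xml_data = {root.tag: xml_to_dict(root)}

# A test can now assert structural equality of the two representations.
print(json_data == xml_data)  # True
```

Normalising to a shared in-memory representation keeps the comparison logic independent of either serialisation format.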

Keywords: XML, JSON, data comparison, integration testing, Python, SQL

Procedia PDF Downloads 94
24505 Pharmaceutical Science and Development in Drug Research

Authors: Adegoke Yinka Adebayo

Abstract:

An understanding of the critical product attributes that impact in vivo performance is key to the production of safe and effective medicines. Thus, a key driver for our research is the development of new basic science and technology underpinning new pharmaceutical products. Our research includes the structure and properties of drugs and excipients, biopharmaceutical characterisation, pharmaceutical processing and technology, and formulation and analysis.

Keywords: drug discovery, drug development, drug delivery

Procedia PDF Downloads 463
24504 Cell Line Screens Identify Biomarkers of Drug Sensitivity in GLIOMA Cancer

Authors: Noora Al Muftah, Reda Rawi, Richard Thompson, Halima Bensmail

Abstract:

Clinical responses to anticancer therapies are often restricted to a subset of patients. In some cases, mutated cancer genes are potent biomarkers of response to targeted agents. There is an urgent need to identify biomarkers that predict which patients are most likely to respond to treatment. Systematic efforts to correlate tumor mutational data with biological dependencies may facilitate the translation of somatic mutation catalogs into meaningful biomarkers for patient stratification. To identify genomic features associated with drug sensitivity and to uncover new biomarkers of sensitivity and resistance to cancer therapeutics, we have screened a panel of several hundred cancer cell lines, integrating mutation, DNA copy number, and gene expression data from different databases with the cell lines' responses to targeted and cytotoxic therapies, covering drugs under clinical and preclinical investigation. We found that mutated cancer genes were associated with cellular response to most currently available glioma cancer drugs, and that some frequently mutated genes were associated with sensitivity to a broad range of therapeutic agents. By linking drug activity to the functional complexity of cancer genomes, systematic pharmacogenomic profiling in cancer cell lines provides a powerful biomarker discovery platform to guide rational cancer therapeutic strategies.
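
The simplest form of such an association test compares drug response between mutant and wild-type cell lines for a candidate gene. The sketch below uses hypothetical IC50-like response values (it omits the penalized-regression machinery the keywords suggest, and a real analysis would add a significance test and multiple-testing correction):

```python
from statistics import mean

def mutation_effect(responses, mutated):
    """Difference in mean drug response between mutant and wild-type
    cell lines; a large gap flags the gene as a candidate biomarker."""
    mut = [r for r, m in zip(responses, mutated) if m]
    wt = [r for r, m in zip(responses, mutated) if not m]
    return mean(mut) - mean(wt)

# Hypothetical IC50-like responses (lower = more sensitive) for eight
# glioma lines, four of which carry a mutation in the candidate gene.
responses = [0.2, 0.3, 0.25, 0.35, 0.9, 0.8, 0.85, 0.95]
mutated = [1, 1, 1, 1, 0, 0, 0, 0]
effect = mutation_effect(responses, mutated)
print(effect)  # negative: mutant lines are more drug-sensitive
```

Running this screen across every gene-drug pair is what turns a cell line panel into a biomarker discovery platform.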

Keywords: cancer, gene network, Lasso, penalized regression, P-values, unbiased estimator

Procedia PDF Downloads 383
24503 Reviewing Privacy Preserving Distributed Data Mining

Authors: Sajjad Baghernezhad, Saeideh Baghernezhad

Abstract:

Nowadays, with the ever-increasing volume of data being generated, methods such as data mining for extracting knowledge are unavoidable. One issue in data mining is the inherently distributed nature of the data: the organisations or individuals creating or receiving such data usually do not give their information freely to others. Yet there is no guarantee that particular data can be mined without intruding on the owner's privacy. The data may be partitioned vertically or horizontally across parties, and the choice of privacy-preserving method, and how data are sent and gathered, depends on the type of partitioning. In this study, an attempt was made to compare privacy-preserving data mining methods comprehensively; general techniques such as data randomisation and encoding are also examined, along with the strong and weak points of each.
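
Data randomisation, one of the general techniques mentioned above, can be sketched in a few lines: each party perturbs its values with zero-mean noise before sharing, masking individual records while leaving aggregates estimable. The salary figures and noise scale below are illustrative assumptions.

```python
import random

def randomize(values, scale, seed=0):
    """Perturb each value with zero-mean uniform noise before sharing;
    individual records are masked but aggregates remain estimable."""
    rng = random.Random(seed)
    return [v + rng.uniform(-scale, scale) for v in values]

# Hypothetical private values (e.g. salaries, in thousands), repeated to
# give the miner a reasonably large sample.
true_values = [52.0, 47.0, 61.0, 39.0, 55.0] * 200
noisy = randomize(true_values, scale=20.0)

true_mean = sum(true_values) / len(true_values)
noisy_mean = sum(noisy) / len(noisy)
print(round(true_mean, 1), round(noisy_mean, 1))  # aggregate means stay close
```

The trade-off this exposes is exactly the one the survey weighs: larger noise gives stronger privacy but noisier mining results.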

Keywords: data mining, distributed data mining, privacy protection, privacy preserving

Procedia PDF Downloads 492
24502 The Right to Data Portability and Its Influence on the Development of Digital Services

Authors: Roman Bieda

Abstract:

The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, creating a new legal framework for the protection of personal data in the European Union. Article 20 of the GDPR introduces a right to data portability. This right allows data subjects to receive the personal data which they have provided to a data controller in a structured, commonly used and machine-readable format, and to transmit this data to another data controller. By facilitating the transfer of personal data between IT environments (e.g., applications), the right to data portability will also facilitate changing service providers (e.g., changing a bank or a cloud computing provider). It will therefore contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.
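
In technical terms, a "structured, commonly used and machine-readable format" is commonly satisfied with JSON. A minimal export sketch (the record fields are invented for illustration; Article 20 does not prescribe a specific format):

```python
import json

def export_portable(record):
    """Serialise a data subject's provided data into a structured,
    machine-readable format suitable for transmission to another controller."""
    return json.dumps(record, indent=2, sort_keys=True)

record = {
    "subject": "jan.kowalski@example.com",
    "provided_data": {"address": "Krakow", "preferences": ["news", "sport"]},
}
blob = export_portable(record)
# The receiving controller can re-import the data without loss.
print(json.loads(blob) == record)  # True
```

The round-trip property (export then re-import yields the same data) is what makes such a format usable for switching providers.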

Keywords: data portability, digital market, GDPR, personal data

Procedia PDF Downloads 445
24501 Comparison of Blockchain Ecosystem for Identity Management

Authors: K. S. Suganya, R. Nedunchezhian

Abstract:

In recent years, blockchain technology has been regarded as the most significant discovery of the digital era after the Internet and cloud computing. A blockchain is a distributed public ledger that records users' transactions in blocks. After validation by the blockchain miners, a global copy of each block is shared among all peers in the network. Once a block is validated and accepted, it cannot be altered by any user, making transactions trustless; the ledger also resolves the double-spending problem using established cryptographic methods. Since the advent of Bitcoin, blockchain has been the backbone of its transactions, and in recent years it has taken root in many other fields, such as smart contracts, smart city management, and healthcare. Identity management against digital identity theft has become a major concern among financial and other organizations. To counter digital identity theft, blockchain technology can be combined with existing identity management systems: a distributed public ledger holds an individual's identity information, such as digital birth certificates, citizenship numbers, bank details, voter details, and driving licenses, in the form of blocks that, once verified on the blockchain, become time-stamped, unforgeable, and publicly visible to any legitimate user. The main challenge in using blockchain technology to prevent digital identity theft is ensuring the pseudo-anonymity and privacy of the users. This survey paper studies blockchain concepts, consensus protocols, and various blockchain-based digital identity management systems together with their research scope. It also discusses the role of blockchain in COVID-19 pandemic management through self-sovereign identity and supply chain management.
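The hash-chained ledger idea described above can be sketched in a few lines, assuming a simplified single-writer chain with no consensus protocol: each block stores identity attributes, a timestamp, and the previous block's hash, so later tampering is detectable.

```python
import hashlib
import json

def make_block(identity_record, prev_hash, timestamp=0):
    """Build a block whose hash covers its record, timestamp, and the
    previous block's hash, chaining the ledger together."""
    block = {"record": identity_record, "prev_hash": prev_hash, "timestamp": timestamp}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash and check each link to the previous block."""
    for i in range(1, len(chain)):
        body = {k: v for k, v in chain[i].items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if chain[i]["hash"] != recomputed or chain[i]["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"citizen_id": "0000"}, prev_hash="0" * 64)
chain = [genesis, make_block({"citizen_id": "1234", "license": "DL-99"}, genesis["hash"])]
print(verify_chain(chain))  # the chain is intact
chain[1]["record"]["license"] = "DL-00"  # an identity-tampering attempt
print(verify_chain(chain))  # tampering breaks the recomputed hash
```

A real identity ledger adds the parts deliberately omitted here: a consensus protocol among peers, and selective disclosure so the attributes themselves are not public.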

Keywords: blockchain, consensus protocols, bitcoin, identity theft, digital identity management, pandemic, COVID-19, self-sovereign identity

Procedia PDF Downloads 98
24500 Cas9-Assisted Direct Cloning and Refactoring of a Silent Biosynthetic Gene Cluster

Authors: Peng Hou

Abstract:

Natural products produced by marine bacteria serve as an immense reservoir of anti-infective drugs and therapeutic agents. Heterologous expression of gene clusters of interest has been widely adopted as an effective strategy for natural product discovery. Briefly, the heterologous expression workflow is: biosynthetic gene cluster identification, pathway construction and expression, and product detection. However, gene cluster capture using the traditional transformation-associated recombination (TAR) protocol is inefficient (a 0.5% positive colony rate). To make things worse, most putative new natural products are only predicted by bioinformatic analyses such as antiSMASH, and their corresponding biosynthetic pathways are either not expressed or expressed at very low levels under laboratory conditions. These setbacks inspired us to seek new technologies for efficiently editing and refactoring biosynthetic gene clusters. Two cutting-edge techniques attracted our attention: CRISPR-Cas9 and Gibson Assembly. We pretreated Brevibacillus laterosporus genomic DNA with CRISPR-Cas9 nucleases that specifically generated breaks near the gene cluster of interest; this increased the efficiency of gene cluster capture to 9%. Moreover, using Gibson Assembly to add or delete certain operons and tailoring enzymes regardless of end compatibility, the silent construct (~80 kb) was successfully refactored into an active one, yielding a series of expected analogs. With these novel molecular tools, we are confident that developing a mature, high-throughput pipeline for DNA assembly, transformation, and product isolation and identification is no longer a daydream for marine natural product discovery.

Keywords: biosynthesis, CRISPR-Cas9, DNA assembly, refactor, TAR cloning

Procedia PDF Downloads 257
24499 Engaging Girls in 'Learn Science by Doing' as Strategy for Enhanced Learning Outcome at the Junior High School Level in Nigeria

Authors: Stella Y. Erinosho

Abstract:

In an attempt to improve girls' interest in science, an instructional package on 'Learn Science by Doing (LSD)' was developed to support science teachers in teaching integrated science at the junior secondary level in Nigeria. LSD provides an instructional framework aimed at actively engaging girls in beginners' science through activities that are discovery-oriented and allow for experiential learning. The goal of this study was to show the impact of applying LSD on girls' performance in and attitude toward science. The major hypothesis tested was that students would exhibit higher learning outcomes (achievement and attitude) in science as an effect of exposure to the LSD instructional package. A quasi-experimental design was adopted, incorporating four all-girls schools. Three of the schools (comprising six classes) were randomly designated as experimental and one as the control. The sample comprised 357 girls (275 experimental and 82 control) and nine science teachers drawn from the experimental schools. A questionnaire was designed to gather data on students' background characteristics and their attitude toward science, while the cognitive outcomes were measured using tests. Both within and between groups, the girls who were exposed to LSD exhibited improved cognitive outcomes and a more positive attitude towards science compared with those who received conventional teaching. The data are consistent with previous studies indicating that interactive learning activities increase student performance and interest.

Keywords: active learning, school science, teaching and learning, Nigeria

Procedia PDF Downloads 359
24498 Balance of Natural Resources to Manage Land Use Changes in Subosukawonosraten Area

Authors: Sri E. Wati, D. Roswidyatmoko, N. Maslahatun, Gunawan, Andhika B. Taji

Abstract:

Natural resources are the main source for fulfilling human needs, and their utilization must consider not only human prosperity but also sustainability. A balance of natural resources is a tool for managing natural wealth and controlling land use change; it is needed to organize land use planning as stated in the spatial plan of a given region. A balance of natural resources can be calculated by comparing two series of natural resource data obtained in different years. In this case, land and forest data spanning a four-year period (2010 and 2014) were used. Land use data were acquired through satellite image interpretation and field checking. By means of GIS analysis, the result was then assessed against the land use plan to evaluate whether existing land use conforms to it and, if not, what efforts and policies must be undertaken to overcome the situation. Subosukawonosraten is a rapidly developing area in Central Java Province. It consists of seven regencies/cities: Sukoharjo Regency, Boyolali Regency, Surakarta City, Karanganyar Regency, Wonogiri Regency, Sragen Regency, and Klaten Regency. The region comprises several former areas of the Surakarta Residency (Karesidenan Surakarta), adjacent to Surakarta. The balance of forest resources shows that the extent of forest area has not changed significantly, though some land uses within it have changed slightly: some rice field areas have been converted into settlement (0.03%), whereas water bodies have become vacant areas (0.09%). On the other hand, the balance of land resources shows many land use changes in the region. The extent of rice fields decreased by 428 hectares; more than 50% of that area was transformed into settlement, and 11.21% was converted into buildings such as factories, hotels, and other infrastructure. This occurred mostly in Sragen, Sukoharjo, and Karanganyar Regencies.
The results illustrate that land use change in this region is driven mostly by population growth. Agricultural land has been converted into built-up area as demand for settlement, industrial areas, and other infrastructure increases. Unfortunately, the current utilization of more than half of the total area does not conform to the land use plan declared in the spatial planning document. This means the local government should develop strict regulation and law enforcement against any violation in land use management.
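The balance computation described above, comparing two series of land-use data, can be sketched as follows; all area figures except the reported 428-hectare rice-field decline are invented for illustration.

```python
# Two series of land-use areas (hectares) from different years; only the
# 428 ha rice-field decline echoes the figure reported for the region.
use_2010 = {"rice_field": 5200.0, "settlement": 3100.0, "forest": 8000.0}
use_2014 = {"rice_field": 4772.0, "settlement": 3350.0, "forest": 7995.0}

def balance(before, after):
    """Return (absolute change in ha, percentage change) per land-use class."""
    return {cls: (after[cls] - before[cls],
                  100.0 * (after[cls] - before[cls]) / before[cls])
            for cls in before}

for cls, (delta, pct) in balance(use_2010, use_2014).items():
    print(f"{cls}: {delta:+.0f} ha ({pct:+.2f}%)")
```

In practice each class's area would come from a GIS overlay of the two interpreted land-use layers rather than hand-entered totals.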

Keywords: balance, forest, land, spatial plan

Procedia PDF Downloads 292
24497 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people's opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap by directly measuring people's opinions before and after an interaction. However, those experiments force people to express their opinion as a number instead of using natural language (eventually encoding it as numbers afterwards). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of those studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ("agree" or "disagree"). We also measured the certainty of each answer, expressed as a number between 1 and 10; this value was not shown to other participants, keeping the interaction based on natural language. We then showed the opinion (but not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called "continuous opinion") ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions.
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This strongly violates what common models suggest, where a person starting at, for example, +8 would first move towards 0 instead of jumping directly to -8. We also observed social influence: people exposed to "agree" were more likely to move to higher levels of continuous opinion, while people exposed to "disagree" were more likely to move to lower levels. However, the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. The model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data, and the model offers new insight into the fundamental terms of opinion dynamics models.
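A minimal sketch of the continuous-opinion encoding and an update rule in the spirit described above, where random fluctuation dominates the social-influence pull; the influence and noise magnitudes are assumptions for illustration, not the fitted parameters of the paper's model.

```python
import random

def continuous_opinion(agrees, certainty):
    """Encode 'agree'/'disagree' plus a 1-10 certainty as one value in [-10, 10]."""
    return (1 if agrees else -1) * certainty

def update(op, partner_agrees, influence=0.5, noise_sd=1.5):
    """Hypothetical update: a small pull toward the partner's stated side,
    plus a random fluctuation larger than the influence itself."""
    pull = influence if partner_agrees else -influence
    new_op = op + pull + random.gauss(0, noise_sd)
    return max(-10.0, min(10.0, new_op))

random.seed(0)  # reproducible sketch
pop = [continuous_opinion(random.random() < 0.5, random.randint(1, 10))
       for _ in range(1000)]
after = [update(op, partner_agrees=True) for op in pop]
mean_shift = sum(after) / len(after) - sum(pop) / len(pop)
print(round(mean_shift, 2))  # positive on average when everyone hears "agree"
```

Even though each individual move is noise-dominated, the population mean drifts toward the exposed opinion, which is the influence signature the experiment detected.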

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 86
24496 An Efficient Mitigation Plan to Encounter Various Vulnerabilities in Internet of Things Enterprises

Authors: Umesh Kumar Singh, Abhishek Raghuvanshi, Suyash Kumar Singh

Abstract:

As IoT networks gain popularity, they become more susceptible to security breaches. As a result, it is crucial to analyze the IoT platform as a whole from the standpoint of core security concepts. The Internet of Things relies heavily on wireless networks, which are well known for being susceptible to a wide variety of attacks. This article provides an analysis of several techniques that may be used to identify vulnerabilities in the software and hardware associated with the Internet of Things (IoT). In the current investigation, an experimental setup is built with the assistance of server computers, client PCs, IoT development boards, sensors, and cloud subscriptions. Through the use of network host scanning methods and vulnerability scanning tools, raw data relating to IoT-based applications and devices are collected. Shodan is used for scanning, for effective vulnerability discovery in IoT devices, and for penetration testing. This article presents an efficient mitigation plan for encountering vulnerabilities in the Internet of Things.
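The host-scanning step can be illustrated with plain TCP connect probes (Shodan itself is a hosted service queried through its own API and is not reproduced here); the host and port list are illustrative assumptions, and such probes should only be run against devices you are authorized to test.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Ports commonly exposed by IoT devices: SSH, Telnet, HTTP(S), MQTT, alt-HTTP.
COMMON_IOT_PORTS = [22, 23, 80, 443, 1883, 8080]

def port_open(host, port, timeout=0.5):
    """True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan(host, ports=COMMON_IOT_PORTS):
    """Probe the given ports concurrently and return the open ones."""
    with ThreadPoolExecutor(max_workers=len(ports)) as pool:
        results = pool.map(lambda p: (p, port_open(host, p)), ports)
    return [p for p, is_open in results if is_open]

if __name__ == "__main__":
    print(scan("127.0.0.1"))  # only scan hosts you are authorized to test
```

An open Telnet port (23) on such a scan is exactly the kind of raw finding the vulnerability-scanning stage would flag for mitigation.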

Keywords: internet of things, security, privacy, vulnerability identification, mitigation plan

Procedia PDF Downloads 14
24495 Ottoman Archaeology in Kostence (Constanta, Romania): A Locality on the Periphery of the Ottoman World

Authors: Margareta Simina Stanc, Aurel Mototolea, Tiberiu Potarniche

Abstract:

The city of Constanta (formerly Köstence) is located in the Dobrogea region, on the west shore of the Black Sea. Between 1420 and 1878, Dobrogea was a possession of the Ottoman Empire. Archaeological research beginning in the second half of the 20th century revealed various traces of the Ottoman period in this region. Between 2016 and 2018, preventive archaeological research conducted within the perimeter of the old Ottoman city of Köstence led to the discovery of habitation structures as well as numerous artifacts of the Ottoman period (pottery, coins, buckles, etc.). This study uses the analysis of these new discoveries to complete the picture of daily life in the Ottoman period. In 2017, in the peninsular area of Constanta, preventive archaeological research began at a point in the former Ottoman quarter. Between the current ground level and a depth of 1.5 m, materials of the Ottoman period appeared constantly. Worth noting is the structure of a large building that had been repaired at least once but could not be fully investigated. Parallel to this wall ran a transversely arranged, brick-lined drainage channel, which emptied into a tank (hazna) filled with various period materials, mainly gilded ceramics and iron objects. This type of hazna is commonly found in Constanta for the pre-modern and modern periods, owing to the lack of a sewage system in the peninsular area. A similar structure, probably a fountain, was discovered in 2016 in another part of the old city. Interesting pieces include a probably Persian cup and a bowl in the Kütahya style, both of the 17th century, proof of the commercial routes passing through Constanta during that period and indirect confirmation of the documentary testimonies of the time.
Also worth mentioning is the discovery in 2016, during underwater research carried out by specialists of the Constanta Museum, of a Turkish oil lamp (17th to early 18th century) at a depth of 15 meters, among other objects from a sunken ship. The archaeological pieces, whether fragmentary or intact, found in the 2016-2018 research campaigns are undergoing processing or restoration, with the aim of drawing out all the available information and establishing exact analogies. These discoveries bring new data to the knowledge of daily life under the Ottoman administration in the former Köstence, a locality on the periphery of the Islamic world.

Keywords: habitation, material culture, Ottoman administration, Ottoman archaeology, periphery

Procedia PDF Downloads 104
24494 Optimal Planning of Dispatchable Distributed Generators for Power Loss Reduction in Unbalanced Distribution Networks

Authors: Mahmoud M. Othman, Y. G. Hegazy, A. Y. Abdelaziz

Abstract:

This paper proposes a novel heuristic algorithm that determines the best size and location of distributed generators in unbalanced distribution networks. The proposed algorithm can handle planning cases where power loss is to be optimized without violating the system's practical constraints. Each distributed generation unit is modeled as a voltage-controlled node, with the flexibility to be converted to a constant-power-factor node in case of a reactive power limit violation. The algorithm is implemented in MATLAB and tested on the IEEE 37-node feeder. The results obtained show the effectiveness of the proposed algorithm.
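The search idea, trying candidate buses and DG sizes and keeping the loss-minimizing combination that respects the constraints, can be sketched as below. The quadratic loss model is a stand-in for illustration only, not the paper's unbalanced three-phase power flow, and all numbers are invented.

```python
def losses(load_kw, dg_bus, dg_kw):
    """Toy feeder loss: loss at each bus grows with the square of the power
    that still has to be supplied after local DG injection."""
    total = 0.0
    for bus, load in enumerate(load_kw):
        net = load - (dg_kw if bus == dg_bus else 0.0)
        total += 0.001 * net ** 2  # hypothetical resistance factor
    return total

def plan_dg(load_kw, sizes_kw, dg_limit_kw):
    """Try every (bus, size) pair within the limit; keep the loss minimizer."""
    best = (None, None, losses(load_kw, dg_bus=-1, dg_kw=0.0))  # no-DG baseline
    for bus in range(len(load_kw)):
        for size in sizes_kw:
            if size > dg_limit_kw:  # practical constraint check
                continue
            loss = losses(load_kw, bus, size)
            if loss < best[2]:
                best = (bus, size, loss)
    return best

loads = [100.0, 250.0, 400.0, 150.0]
bus, size, loss = plan_dg(loads, sizes_kw=[100, 200, 400, 800], dg_limit_kw=500)
print(bus, size)  # the heaviest-loaded bus with a matching size wins
```

The paper's version replaces the toy loss function with a full unbalanced power flow and adds the voltage-controlled/constant-power-factor node switching; the enumerate-and-keep-best skeleton stays the same.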

Keywords: distributed generation, heuristic approach, optimization, planning

Procedia PDF Downloads 496
24493 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high liver accumulation by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which liver accumulation dominates (the 0.5-2.5 minute SPECT image minus the 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (the 5-10 minute SPECT image minus the liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, apparently high accumulation in the inferior myocardium caused by overlap with the liver was undiagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
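The time-subtraction arithmetic can be sketched on synthetic 2-D arrays standing in for reconstructed SPECT frames; the region shapes, uptake values, and time weights are invented, and real processing would operate on reconstructed volumes with estimated washout factors.

```python
import numpy as np

# Synthetic 64x64 "frames": disjoint myocardium and liver regions.
myocardium = np.zeros((64, 64))
myocardium[20:30, 20:30] = 5.0
liver = np.zeros((64, 64))
liver[35:55, 30:55] = 8.0

early = 0.25 * myocardium + 1.0 * liver  # 0.5-2.5 min: liver dominates
late = 1.0 * myocardium + 0.5 * liver    # 5-10 min: myocardium filled in

# Step 1: early minus late isolates the liver; the negative myocardial
# difference is clipped away (the regions do not overlap in this toy case).
liver_only = np.clip(early - late, 0.0, None)

# Step 2: subtract the liver estimate from the late frame (unit scaling
# here because the weights were chosen so liver_only matches late's liver).
corrected = np.clip(late - liver_only, 0.0, None)

print(float(corrected[40, 40]))  # liver pixel: 0.0 (suppressed)
print(float(corrected[25, 25]))  # myocardial pixel: 5.0 (preserved)
```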

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector

Procedia PDF Downloads 302
24492 How to Use Big Data in Logistics Issues

Authors: Mehmet Akif Aslan, Mehmet Simsek, Eyup Sensoy

Abstract:

Big Data stands for today's cutting-edge technology. As the technology becomes widespread, so does data. Utilizing massive data sets enables companies to gain competitive advantages over their adversaries. Among the many areas of Big Data usage, logistics plays a significant role in both the commercial sector and the military. This paper lays out what Big Data is and how it is used in both military and commercial logistics.

Keywords: big data, logistics, operational efficiency, risk management

Procedia PDF Downloads 613
24491 Efficient Synthesis of Highly Functionalized Biologically Important Spirocarbocyclic Oxindoles via Hauser Annulation

Authors: Kanduru Lokesh, Venkitasamy Kesavan

Abstract:

The unique structural features of spiro-oxindoles, with their diverse biological activities, have made them privileged structures in new drug discovery. The key structural characteristic of these compounds is the spiro ring fused at the C-3 position of the oxindole core with varied heterocyclic motifs. Structural diversification of heterocyclic scaffolds to synthesize new chemical entities as pharmaceuticals and agrochemicals is one of the important goals of synthetic organic chemists. Nitrogen- and oxygen-containing heterocycles are by far the most widely occurring privileged structures in medicinal chemistry. The structural complexity and distinct three-dimensional arrangement of functional groups in these privileged structures are generally responsible for their specificity against biological targets. Structurally diverse compound libraries have proved to be valuable assets for drug discovery against challenging biological targets. Thus, identifying new combinations of substituents at the C-3 position of the oxindole moiety is of great importance in drug discovery for improving the efficiency and efficacy of drugs. The development of suitable methodology for the synthesis of spiro-oxindole compounds has attracted much interest, often in response to the significant biological activity displayed by both natural and synthetic compounds. Creating structural diversity of oxindole scaffolds is therefore a pressing need and a formidable challenge. A general way to improve synthetic efficiency, and also to access diversified molecules, is through annulation reactions. Annulation reactions allow the formation of complex compounds from simple substrates in a single transformation consisting of several steps, in an ecologically and economically favorable way. These observations motivated us to develop an annulation reaction protocol enabling the synthesis of a new class of spiro-oxindole motifs, which in turn would enhance molecular diversity.
As part of our enduring interest in developing novel, efficient synthetic strategies for biologically important oxindole-fused spirocarbocyclic systems, we have developed an efficient methodology for the construction of highly functionalized spirocarbocyclic oxindoles through [4+2] annulation of phthalides via Hauser annulation. The synthesis of functionalized spirocarbocyclic oxindoles was accomplished for the first time in the literature using the Hauser annulation strategy. The reaction between methyleneindolinones and arylsulfonylphthalides, catalyzed by cesium carbonate, gave access to a new class of biologically important spiro[indoline-3,2'-naphthalene] derivatives in very good yields. The synthetic utility of the annulated product was further demonstrated by fluorination using NFSI as the fluorinating agent to furnish the corresponding fluorinated product.

Keywords: Hauser-Kraus annulation, spirocarbocyclic oxindoles, oxindole ester, fluorination

Procedia PDF Downloads 179
24490 On an Approach for Rule Generation in Association Rule Mining

Authors: B. Chandra

Abstract:

In association rule mining, much attention has been paid to developing algorithms for finding large (frequent/closed/maximal) itemsets, but very little to improving the performance of rule generation, even though rule generation is an important part of association rule mining. In this paper, a novel approach named NARG (Association Rule using Antecedent Support) is proposed for rule generation; it uses a memory-resident data structure named FCET (Frequent Closed Enumeration Tree) to find frequent/closed itemsets. In addition, the computational speed of NARG is enhanced by giving priority to rules that have lower antecedent support. A comparative performance evaluation of NARG against a fast association rule mining algorithm for rule generation was carried out on synthetic datasets and real-life datasets (taken from the UCI Machine Learning Repository). The performance analysis shows that NARG is computationally faster than the existing algorithms for rule generation.
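The rule-generation step, with antecedents tried in order of increasing support echoing NARG's preference for low-antecedent-support rules, can be sketched as follows; the itemset supports are invented, and the FCET structure is replaced by a plain dictionary for clarity.

```python
from itertools import combinations

# Support counts for the frequent itemsets of a hypothetical transaction set.
support = {
    frozenset("A"): 6, frozenset("B"): 7, frozenset("C"): 5,
    frozenset("AB"): 4, frozenset("AC"): 3, frozenset("BC"): 4,
    frozenset("ABC"): 3,
}

def generate_rules(itemset, min_conf=0.6):
    """Emit rules A -> (itemset \\ A) whose confidence clears min_conf,
    trying antecedents in order of increasing support.
    confidence(A -> B) = support(A ∪ B) / support(A)."""
    rules = []
    subsets = [frozenset(c) for n in range(1, len(itemset))
               for c in combinations(itemset, n)]
    for antecedent in sorted(subsets, key=lambda a: support[a]):
        conf = support[itemset] / support[antecedent]
        if conf >= min_conf:
            rules.append((set(antecedent), set(itemset - antecedent), conf))
    return rules

for ante, cons, conf in generate_rules(frozenset("ABC")):
    print(sorted(ante), "->", sorted(cons), round(conf, 2))
```

The ordering illustrates why low antecedent support helps: with support(itemset) fixed, a smaller denominator yields higher confidence, so the most promising rules surface first.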

Keywords: knowledge discovery, association rule mining, antecedent support, rule generation

Procedia PDF Downloads 296
24489 Developing Open-Air Museum: The Heritage Conservation Effort, Oriented to Geotourism Concept and Education

Authors: Rinaldi Ikhram, R. A. Julia Satriani

Abstract:

Historical objects discovered in Indonesia, especially in the area around Bandung and the Priangan zone in general, were inventoried and recorded by Dutch geologists during the colonial period. Among them are artefacts such as axes made of chalcedony and quartzite; arrowheads, knives, scrapers, and drill bits made from obsidian; grindstones; and even stone bracelets. Ceramic molds for smelting bronze or iron were also found. The abundance of artefacts inspired Dr. W. Docters van Leeuwen and his colleagues to initiate the establishment of a Sunda open-air museum ("Soenda Openlucht Museum") in 1917, located in the hills of the North Bandung area, the site of pre-historic settlements in need of conservation. Unfortunately, this plan was not implemented because, shortly after, World War II occurred. Heritage conservation is one of our responsibilities as geologists today, and the open-air museum may be one solution for conserving historic sites around the world. In this paper, the study of the development of an open-air museum is focused on the Dago area of North Bandung. The methods used are analysis of field survey data and of the remaining artefacts stored at both the National Museum in Jakarta and the Bandung Museum of Geology. The museum is based on geotourism and further research on pre-historic culture, and its purpose is to give people a common interest and motivate them to participate in the research and conservation of pre-historic relics. This paper describes in detail the concept, form, and management of the geopark and the open-air museum within it.

Keywords: geoparks, heritage conservation, open-air museum, sustainable tourism

Procedia PDF Downloads 323
24488 A Conundrum of Teachability and Learnability of Deaf Adult English as Second Language Learners in Pakistani Mainstream Classrooms: Integration or Elimination

Authors: Amnah Moghees, Saima Abbas Dar, Muniba Saeed

Abstract:

Teaching a second language to deaf learners has always been a challenge in Pakistan. Different approaches and strategies have been followed, but they have resulted in partial or complete failure. This study investigates the language problems faced by adult deaf learners of English as a second language in mainstream classrooms. Moreover, it identifies the factors involved in language teaching and learning in mainstream classes. To investigate the language problems, data will be collected through writing samples from ten deaf adult learners and ten hearing ESL learners of the same class, whereas observation in inclusive language teaching classrooms and interviews with five ESL teachers of inclusive classes will be conducted to identify the factors directly or indirectly involved in inclusive language education. A qualitative research paradigm will be applied to analyse the corpus. The study finds that deaf ESL learners face severe language issues compared with hearing ESL learners, such as odd sentence structures, subject-verb agreement violations, and misuse of verb forms and tenses. The study also predicts that multiple factors affect the smoothness of the teaching and learning process in mainstream classrooms: the role of the mediator, the level of the deaf learners, the empathy of hearing learners towards deaf learners, and the language teacher's training.

Keywords: deaf English language learner, empathy, mainstream classrooms, previous language knowledge of learners, role of mediator, language teachers' training

Procedia PDF Downloads 142
24487 What Smart Can Learn about Art

Authors: Faten Hatem

Abstract:

This paper explores the understanding of the role and meaning of art and whether it is perceived as separate from smart city construction. It emphasises the significance of fulfilling the inherent human need for discovery and interaction, which drives people to explore new places and engage with works of art. This is done by exploring the ways of thinking about, and the types of, art in Milton Keynes, illustrating a general pattern of misunderstanding that rests on the separation of smart, art, and architecture, and promoting a better and deeper understanding of the interconnections between neuroscience, art, and architecture. A reflective approach is used to clarify the potential and impact of using art-based research, methodology, and ways of knowing when approaching global phenomena and knowledge production, while examining the process of making and developing smart cities and asserting that such factors can severely affect the conduct of the study itself. The research follows a case study strategy; the qualitative methods included data collection and analysis involving interviews and observations that relied on visuals.

Keywords: smart cities, art and smart, smart cities design, smart cities making, sustainability, city brain and smart cities metrics, smart cities standards, smart cities applications, governance, planning and policy

Procedia PDF Downloads 77