Search results for: cloud data privacy and integrity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25491

22821 Functional and Efficient Query Interpreters: Principle, Application and Performances’ Comparison

Authors: Laurent Thiry, Michel Hassenforder

Abstract:

This paper presents a general approach to implementing efficient query interpreters in a functional programming language. Most standard tools currently available use an imperative and/or object-oriented language for the implementation (e.g. Java for Jena-Fuseki), but other paradigms are possible, potentially with better performance. The paper first explains how to model data structures and queries from a functional point of view. It then proposes a general methodology to measure performance (i.e. the number of computation steps needed to answer a query) and explains how to integrate optimization techniques (short-cut fusion and, more importantly, data transformations). Finally, it compares the proposed functional server to a standard tool (Fuseki), demonstrating that the former can be two to ten times faster at answering queries.
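The abstract does not include code; as a minimal sketch of the functional view it describes (data as plain values, queries as composed functions, and fusion of pipeline stages to cut computation steps), assuming a toy triple store rather than the paper's actual server:

```python
from functools import reduce

# Hypothetical mini triple store: data modeled as plain tuples.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "age", "30"),
]

def compose(*fs):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

# Naive query pipeline: two passes over the data (filter, then project).
def naive(db):
    knows = [t for t in db if t[1] == "knows"]
    return [t[2] for t in knows]

# "Fused" pipeline: filter and projection in a single pass, the kind of
# short-cut fusion the paper exploits to reduce computation steps.
def fused(db):
    return [o for (s, p, o) in db if p == "knows"]

assert naive(triples) == fused(triples)
```

The fused version traverses the data once instead of twice while producing the same answer, which is the essence of the optimization the abstract refers to.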

Keywords: data transformation, functional programming, information server, optimization

Procedia PDF Downloads 148
22820 Dimension Free Rigid Point Set Registration in Linear Time

Authors: Jianqin Qu

Abstract:

This paper proposes a rigid point set matching algorithm in arbitrary dimensions based on the idea of symmetric covariant functions. A group of functions of the points in the set is formulated using rigid invariants. Each of these functions computes a pair of correspondences from the given point set. The computed correspondences are then used to recover the unknown rigid transform parameters. Each computed point can be geometrically interpreted as a weighted mean center of the point set. The algorithm is compact, fast, and dimension-free, with no optimization process. It either computes the desired transform for noiseless data in linear time or fails quickly in exceptional cases. Experimental results on synthetic data and 2D/3D real data are provided, demonstrating the algorithm's potential application to a wide range of problems.
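The abstract does not spell out its symmetric covariant functions, so as a hedged point of comparison, here is the standard closed-form, dimension-free rigid alignment (the Kabsch/Umeyama construction) that likewise recovers the transform without iterative optimization, assuming correspondences are already known:

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form rigid transform (R, t) mapping point set P onto Q.
    Works in any dimension d; P and Q are (n, d) arrays with known
    correspondences. Both sets are centered on their mean, the
    cross-covariance is decomposed by SVD, and a possible reflection
    is corrected via the sign of the determinant."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # d x d cross-covariance
    U, _, Vt = np.linalg.svd(H)
    s = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [s])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Like the algorithm in the abstract, this runs in linear time in the number of points (plus a fixed-size SVD), regardless of dimension.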

Keywords: covariant point, point matching, dimension free, rigid registration

Procedia PDF Downloads 163
22819 Metallography of Remelted A356 Aluminium following Squeeze Casting

Authors: Azad Hussain, Andrew Cobley

Abstract:

The demand for lightweight parts with high mechanical strength and integrity, in sectors such as aerospace and automotive, is ever increasing, motivated by the need for weight reduction to improve fuel efficiency; such components are usually manufactured from a high-grade primary metal or alloy. For components manufactured by squeeze casting, this alloy is usually A356 aluminium (Al), one of the most versatile Al alloys, used extensively in castings for demanding environments. A356 castings provide a good strength-to-weight ratio, making the alloy an attractive option for components where strength must be maintained, with the added advantage of weight reduction. In addition, its castability, weldability and corrosion resistance allow the A356 cast alloy to be used in a large array of industrial applications. Conversely, remelted Al is rarely used in these cases because such components serve in demanding environments where material properties must meet defined specifications, for example a known strength or ductility. However, the use of remelted Al, especially primary-grade Al such as A356, would offer significant cost and energy savings for manufacturers using primary alloys, provided that the remelted aluminium can offer a similar material microstructure and mechanical properties. This study presents the material microstructure and properties of castings made from 100% primary A356 Al and from 100% remelted A356 Al, both manufactured via the direct squeeze casting method. The microstructures of the castings made from remelted A356 Al were compared with those of primary A356 Al. The effect of remelting on the microstructure was examined via different analytical techniques: optical microscopy of polished and etched surfaces, and scanning electron microscopy.
Microstructural analysis of the 100% remelted Al, compared with the primary Al, shows similar α-Al phase, primary Al dendrites, particles and eutectic constituents. Mechanical testing of cast samples will provide further information on the suitability of using 100% remelted Al for casting.

Keywords: A356, microstructure, remelt, squeeze casting

Procedia PDF Downloads 200
22818 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect changes in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may differ in accuracy and sensitivity range, and they degrade over time. As a result, the collected time-series data contain uncertainties and are sometimes conflicting. In this study, we present a framework that exploits the capability of evidence theory (a.k.a. Dempster-Shafer and Dezert-Smarandache theories) to represent and manage uncertainty and conflict, in order to achieve fast change detection and to deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the relevant combination rules are applied to combine the mass values across all sensors. Furthermore, we applied the method to estimate the minimum number of sensors that need to be combined, improving computational efficiency. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
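The evidence-theory fusion layer is specific to the paper, but the underlying quickest-change machinery it builds on can be sketched: Page's CUSUM test accumulates the log-likelihood ratio of post- versus pre-change densities, and its expected detection delay scales inversely with the KL divergence between the two distributions. The Gaussian single-sensor setting below is an illustrative assumption, not the paper's multi-sensor setup:

```python
import math
import random

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def cusum(samples, mu0, mu1, sigma, threshold):
    """Page's CUSUM test for a mean shift from N(mu0, sigma^2) to
    N(mu1, sigma^2): accumulate the log-likelihood ratio, clip at zero,
    and declare a change when the statistic crosses the threshold.
    Expected delay is roughly threshold / KL, where for Gaussians
    KL = (mu1 - mu0)^2 / (2 sigma^2)."""
    s = 0.0
    for k, x in enumerate(samples):
        s = max(0.0, s + gauss_logpdf(x, mu1, sigma) - gauss_logpdf(x, mu0, sigma))
        if s > threshold:
            return k  # sample index at which the change is declared
    return None

# Example stream: change from N(0,1) to N(2,1) at sample 50.
rng = random.Random(42)
stream = [rng.gauss(0, 1) for _ in range(50)] + [rng.gauss(2, 1) for _ in range(50)]
alarm = cusum(stream, mu0=0.0, mu1=2.0, sigma=1.0, threshold=8.0)
```

With a mean shift of 2 (KL divergence 2 nats per sample), the alarm fires a handful of samples after the true change point.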

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data

Procedia PDF Downloads 322
22817 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

Developing methods to annotate unknown gene functions is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway in which a gene is involved. Gene expression data have been utilized for this identification, since they reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy, in large part because gene expression is hard to measure precisely: even under the same conditions, measured expression levels usually vary. In this study, we propose a feature extraction method that focuses on this variability to estimate a gene's metabolic pathway accurately. First, we estimate the distribution of each gene's expression from replicate data. Next, we calculate the similarity between all gene pairs by KL divergence, a measure of the difference between distributions. Finally, we use the similarity vectors as feature vectors and train a multiclass SVM to identify the genes' metabolic pathways. To evaluate the method, we applied it to budding yeast and trained a multiclass SVM to identify seven metabolic pathways. The accuracy obtained with the proposed method was higher than that obtained from the raw gene expression data. Thus, the proposed method combined with KL divergence is useful for identifying genes' metabolic pathways.
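The abstract does not name the distribution family fitted to the replicates; assuming a Gaussian per gene (an assumption for illustration only), the pairwise-KL feature construction can be sketched as:

```python
import math
import statistics

def kl_gauss(mu1, var1, mu2, var2):
    """Closed-form KL divergence D( N(mu1, var1) || N(mu2, var2) )."""
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def gene_features(replicates):
    """Fit a Gaussian to each gene's replicate expression values and
    return, for each gene, its vector of KL divergences to every gene
    (itself included). These similarity vectors are the kind of
    feature vectors fed to the multiclass SVM described above; the
    gene names used here are hypothetical."""
    params = {g: (statistics.mean(v), statistics.variance(v))
              for g, v in replicates.items()}
    genes = sorted(params)
    return {g: [kl_gauss(*params[g], *params[h]) for h in genes] for g in genes}
```

A gene's divergence to itself is zero, and genes with similar replicate distributions get near-zero entries, so the vectors encode exactly the variability information the method exploits.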

Keywords: metabolic pathways, gene expression data, microarray, Kullback-Leibler (KL) divergence, support vector machines (SVM), machine learning

Procedia PDF Downloads 393
22816 Tri/Tetra-Block Copolymeric Nanocarriers as a Potential Ocular Delivery System of Lornoxicam: Experimental Design-Based Preparation, in-vitro Characterization and in-vivo Estimation of Transcorneal Permeation

Authors: Alaa Hamed Salama, Rehab Nabil Shamma

Abstract:

Introduction: Polymeric micelles that can deliver drugs to intended sites of the eye have recently attracted much scientific attention. The aim of this study was to investigate an aqueous-based formulation of drug-loaded polymeric micelles that holds significant promise for ophthalmic drug delivery. The study examined the synergistic performance of mixed polymeric micelles made of linear and branched poly(ethylene oxide)-poly(propylene oxide) for more effective encapsulation of lornoxicam (LX) as a hydrophobic model drug. Methods: The co-micellization of 10% binary systems combining different weight ratios of the highly hydrophilic poloxamers Synperonic® PE/P84 and Synperonic® PE/F127 with the hydrophobic poloxamine counterpart (Tetronic® T701) was investigated by means of photon correlation spectroscopy and cloud point measurements. The drug-loaded micelles were tested for their solubilizing capacity towards LX. Results: The results showed a sharp solubility increase from 0.46 mg/ml to more than 4.34 mg/ml, representing about a 136-fold increase. An optimized formulation was selected to achieve maximum drug-solubilizing power and clarity with the lowest possible particle size. The optimized formulation was characterized by ¹H NMR analysis, which revealed complete encapsulation of the drug within the micelles. Further histopathological and confocal laser studies revealed the non-irritant nature and good corneal penetrating power of the proposed nano-formulation. Conclusion: An LX-loaded polymeric nanomicellar formulation was fabricated that allows easy application of the drug in the form of clear eye drops that do not cause blurred vision or discomfort, thus achieving high patient compliance.

Keywords: confocal laser scanning microscopy, histopathological studies, lornoxicam, micellar solubilization

Procedia PDF Downloads 443
22815 Impact of Instagram Food Bloggers on Consumer (Generation Z) Decision Making Process in Islamabad, Pakistan

Authors: Tabinda Sadiq, Tehmina Ashfaq Qazi, Hoor Shumail

Abstract:

The advent of emerging technology has recently created a new generation of restaurant marketing. This study explores the aspects that influence customers' decision-making process in selecting a restaurant after reading food bloggers' reviews online. The motivation behind this research is to investigate the correlation between the credibility of the source and customers' attitude toward restaurant visits. Employing source credibility theory, the researcher collected the data by distributing a survey questionnaire through Google Forms. A non-probability purposive sampling technique was used, and the questionnaire used a pre-developed and validated scale by Ohanian to measure the relationship. Data were collected from 250 respondents to investigate the influence of food bloggers on Generation Z's decision-making process. SPSS version 26 was used for statistical testing and analysis. The findings of the survey revealed a moderate positive correlation between the variables, indicating that food bloggers do have an impact on Generation Z's decision-making process.

Keywords: credibility, decision making, food bloggers, generation z, e-wom

Procedia PDF Downloads 60
22814 Performance Measurement of Logistics Systems for Thailand's Wholesales and Retails Industries by Data Envelopment Analysis

Authors: Pornpimol Chaiwuttisak

Abstract:

The study compares the performance of logistics in Thailand's wholesale and retail trade industries (except motor vehicles, motorcycles, and stalls) by using data envelopment analysis (DEA). The Thailand Standard Industrial Classification of 2009 (TSIC-2009) categorizes these industries into sub-group no. 45: wholesale and retail trade (except for the repair of motor vehicles and motorcycles), sub-group no. 46: wholesale trade (except motor vehicles and motorcycles), and sub-group no. 47: retail trade (except motor vehicles and motorcycles). The data used in the study were collected by the National Statistical Office, Thailand. The study used four input factors: the number of companies, the number of logistics personnel, logistics training costs, and outsourced logistics management. The output factor was the percentage of enterprises having inventory management. The results showed that the average relative efficiency was 27.87 percent for small-sized enterprises and 49.68 percent for medium-sized enterprises.
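DEA scores each decision-making unit relative to the best performers in the sample. The study's four-input setting requires solving a linear program (the CCR model) per unit; as a hedged sketch, the single-input, single-output special case reduces to a simple ratio and shows what "relative efficiency" means:

```python
def dea_efficiency(inputs, outputs):
    """Relative efficiency in the single-input, single-output DEA case:
    each unit's output/input ratio divided by the best ratio observed,
    so a score of 1.0 means the unit lies on the efficient frontier.
    The numbers below are illustrative, not the study's data."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical enterprises: (input, output) = (2,4), (4,4), (5,10).
scores = dea_efficiency([2, 4, 5], [4, 4, 10])
```

Units one and three achieve the best output-per-input ratio and score 1.0; unit two achieves half that ratio and scores 0.5, analogous to the sub-100% average efficiencies reported for small and medium enterprises.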

Keywords: DEA, wholesales and retails, logistics, Thailand

Procedia PDF Downloads 407
22813 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

With the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor captures only pixels whose intensity changes, so no signal is produced in areas without intensity change. This makes it more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (+1 or -1) and a timestamp, with no intensity value such as an RGB pixel carries. Existing algorithms therefore cannot be used directly, and a new processing algorithm must be designed for DVS data. To cope with the difference in data format, most prior work converts the events into frames and feeds them to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition. However, even when the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as a substitute for intensity, polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning.
Concretely, we first build frames over a fixed time period, then assign an intensity value based on the timestamp of each event in the frame; for example, a high value is given to a recent signal. We expect this representation to capture features of moving objects in particular, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset from a DVS mounted on a parked car, to develop an application for a surveillance system that detects persons around the car. We consider the DVS one of the ideal sensors for surveillance because it can run for a long time with low energy consumption in a static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. Our method achieved an F1 score up to 7 points higher than the benchmark.
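The frame-building step described above can be sketched directly; the event layout (x, y, polarity, t) and the max-per-pixel normalization are assumptions filling in details the abstract leaves open:

```python
def timestamp_frame(events, width, height, t_start, t_end):
    """Render DVS events falling in [t_start, t_end) into one frame
    whose pixel values are normalized timestamps: a recent event maps
    to a value near 1.0, an older event to a smaller value, and a
    pixel with no event stays 0.0. Each event is a tuple
    (x, y, polarity, t); polarity is ignored here because the
    timestamp replaces it as the intensity surrogate."""
    frame = [[0.0] * width for _ in range(height)]
    span = float(t_end - t_start)
    for x, y, _pol, t in events:
        if t_start <= t < t_end:
            # Keep the most recent (largest) normalized timestamp.
            frame[y][x] = max(frame[y][x], (t - t_start) / span)
    return frame
```

Stacking such frames over successive periods yields a CNN input in which a moving object leaves a gradient of values along its path, encoding direction and speed.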

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 86
22812 Comparison of Different Reanalysis Products for Predicting Extreme Precipitation in the Southern Coast of the Caspian Sea

Authors: Parvin Ghafarian, Mohammadreza Mohammadpur Panchah, Mehri Fallahi

Abstract:

Synoptic patterns from the surface up to the tropopause are very important for forecasting weather and atmospheric conditions, and many tools exist to prepare and analyze these maps. Reanalysis data, the outputs of numerical weather prediction models, satellite images, meteorological radar, and weather station data are used by forecasting centers worldwide. Forecasting extreme precipitation on the southern coast of the Caspian Sea (CS) is a major challenge due to the complex topography and the variety of climates in the area. In this research, we used two reanalysis products, the ECMWF Reanalysis 5th Generation (ERA5) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis, for verification of the numerical model. ERA5 is the latest ECMWF reanalysis; its temporal resolution is hourly, while NCEP/NCAR provides data every six hours. Atmospheric parameters such as mean sea level pressure, geopotential height, relative humidity, wind speed and direction, and sea surface temperature were selected and analyzed, and different precipitation types (rain and snow) were examined. The results showed that NCEP/NCAR better captures the intensity of atmospheric systems, while ERA5 is more suitable for extracting parameter values at specific points and for analyzing snowfall events over the CS (snow cover and snow depth). Sea surface temperature plays the main role in generating instability over the CS, especially when cold air passes over it, and the NCEP/NCAR sea surface temperature product has low resolution near the coast. Both datasets were able to detect the meteorological synoptic patterns that led to heavy rainfall over the CS, but due to their time lag they are not suitable for forecast centers; their application lies in research and in the verification of meteorological models.
Finally, ERA5 has better resolution than the NCEP/NCAR reanalysis, but NCEP/NCAR data are available from 1948 and are appropriate for long-term research.

Keywords: synoptic patterns, heavy precipitation, reanalysis data, snow

Procedia PDF Downloads 109
22811 Application of the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM) Database in Nursing Health Problems with Prostate Cancer: A Pilot Study

Authors: Hung Lin-Zin, Lai Mei-Yen

Abstract:

Prostate cancer is the most commonly diagnosed male cancer in the U.S., with a prevalence of around 1 in 8. The etiology of prostate cancer is still unknown, but predisposing factors such as age, black race, family history, and obesity may increase the risk of the disease. In 2020, a total of 7,178 people in Taiwan were newly diagnosed with prostate cancer, accounting for 5.88% of all cancer cases, and the incidence rate ranked fifth among men. In the same year, 1,730 deaths were attributed to prostate cancer, accounting for 3.45% of all cancer deaths; the death rate ranked sixth among men and accounted for 94.34% of deaths from cancers of the male reproductive organs. A review of the domestic and foreign literature on analyses using the OMOP (Observational Medical Outcomes Partnership) database shows that, although nearly a hundred related studies have been published, studies of nursing-related health problems and nursing measures built on the OMOP common data model databases of medical institutions are extremely rare. The OMOP common data model analysis platform, developed by the FDA in 2007, uses a common data model (CDM) to analyze and monitor healthcare data. Building up relevant nursing information from the OMOP-CDM database is important to assist daily practice. Therefore, we chose prostate cancer patients, a common group in our care, and used the OMOP-CDM database to explore their common associated health problems. With the assistance of OMOP-CDM database analysis, we expect earlier diagnosis and prevention of comorbidities in prostate cancer patients and, in turn, improved patient care.

Keywords: OMOP, nursing diagnosis, health problem, prostate cancer

Procedia PDF Downloads 44
22810 Investigation of Learning Challenges in Building Measurement Unit

Authors: Argaw T. Gurmu, Muhammad N. Mahmood

Abstract:

The objective of this research is to identify architecture and construction management students' learning challenges in the building measurement unit. The research used survey data collected from students who had completed the unit, and the NVivo qualitative data analysis software was used to identify relevant themes. The analysis of the qualitative data revealed major learning difficulties such as the inadequacy of practice questions for the examination, inability to work as a team, lack of detailed understanding of the prerequisite units, insufficient time allocated for tutorials, and incompatibility of lecture and tutorial schedules. The output of this research can be used as a basis for improving teaching and learning activities in construction measurement units.

Keywords: building measurement, construction management, learning challenges, evaluate survey

Procedia PDF Downloads 128
22809 Using Data-Driven Model on Online Customer Journey

Authors: Ing-Jen Hung, Tzu-Chien Wang

Abstract:

Nowadays, customers can easily interact with firms through miscellaneous online ads on different channels. In other words, customers now have innumerable options and limitless time to accomplish their commercial activities with firms, individualizing their own online customer journeys. This convenience emphasizes the importance of allocating online advertisements across channels: a profound understanding of customer behavior can yield considerable benefit by optimizing the allocation of funds across diverse ad channels. To achieve this objective, many firms use numerical methods to create data-driven advertisement policies. In our research, we aim to exploit online customer click data to discover the correlations between channels and their sequential relations. We use an LSTM to handle the sequential nature of our data and compare its accuracy with non-sequential methods such as CART decision trees and logistic regression. We also classify customers into several groups by their behavioral characteristics to perceive the differences between groups as customer portraits. As a result, we discover a distinct customer journey under each customer portrait. Our article provides insights for marketing research and can help firms formulate online advertising criteria.
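Before fitting an LSTM to click sequences, the sequential relations between channels can be summarized with a simple first-order transition model, the kind of non-sequential-model baseline the comparison above implies. The channel names are illustrative, not from the study's data:

```python
from collections import defaultdict

def transition_probs(journeys):
    """Estimate first-order transition probabilities between ad
    channels from click sequences: count each adjacent channel pair,
    then normalize the counts per source channel."""
    counts = defaultdict(lambda: defaultdict(int))
    for journey in journeys:
        for a, b in zip(journey, journey[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

# Hypothetical customer journeys over three channels.
journeys = [
    ["search", "display", "purchase"],
    ["search", "purchase"],
    ["display", "search", "purchase"],
]
probs = transition_probs(journeys)
```

An LSTM generalizes this by conditioning on the whole click history rather than only the previous channel, which is why it can separate customer portraits that a first-order model conflates.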

Keywords: LSTM, customer journey, marketing, channel ads

Procedia PDF Downloads 110
22808 The Legal Implications of Gender Quota for Public Companies

Authors: Murat Can Pehlivanoglu

Abstract:

Historically, gender equality has mainly been defended in the legal arenas of constitutional law and employment law. However, social and economic progress has required corporate law to provide gender equality on corporate boards. Recently, following the trend in Europe, the State of California (United States) enacted a law requiring every publicly traded corporation based in California to have women on its board of directors. Still, the legal, social and economic implications of this law are yet to be discovered. The contractarian view of corporate law is predominant in U.S. jurisprudence, yet a gender quota law may not be justifiable on contractarian grounds. The conformity of the gender quota law with the general principles of U.S. corporate law therefore remains questionable, and the immunity of close corporations from the scope of gender quota legislation supports this discrepancy. The methodology employed in this paper to discuss the rule's conformity with corporate law is doctrinal, with American case law and legal scholarship as the basis for the discussion. The paper uses the aforementioned California law as a sample to evaluate gender quota laws' conformity with the contractarian theory of corporate law. California law was chosen because of its newness and the pending shareholder lawsuits against it; moreover, since California is home to global companies, the law's effect is expected to be wide. As alternative theories within corporate law may already be activated to provide gender equality on the boards of publicly traded corporations, enacting a specific gender quota law would not be justified by an allegedly present statutory deficiency under contractarian theory. However, this theoretical reality would not enable shareholders to succeed in their lawsuits against such a law on corporate law grounds, and investors will have limited options against its results.
This will eventually harm the integrity of the marketplace. Through the analysis of the contractarian theory of corporate law and the California gender quota law, the major finding of this paper is that the contractarian theory does not permit mandating boardroom equality through corporate law. In conclusion, the paper argues that the issue should be addressed through separate legislation with a different remedial structure, preserving the traditional rationale of corporate law in U.S. law.

Keywords: board of directors, gender equality, gender quota, publicly traded corporations

Procedia PDF Downloads 118
22807 Religion and Risk: Unmasking Noah's Narratives in the Pacific Islands

Authors: A. Kolendo

Abstract:

The Pacific Islands are among the areas most vulnerable to climate change. Sea level rise and accelerating storm surges continuously threaten communities' habitats on low-lying atolls. Given scientific predictions of encroaching tides on their land, the Islanders have been informed of the need for future relocation planning. However, some communities oppose such retreat strategies, interpreting current climatic changes through the lens of the biblical ark of Noah. This parable states God's promise never to flood the Earth again and never to deprive people of their land and habitats. Several interpretations of the parable have emerged in Oceania, prompting either climate action or denial. Resistance to relocation planning expressed through Christian thought has led religion to be perceived as a barrier to dialogue between the Islanders and scientists. Since climate change concerns natural processes, communities' attitudes towards environmental stewardship shape their responses to it: some Christian teachings indicate humanity's responsibility for the environment, whereas others assert people's dominion over it, prompting resistance and sometimes denial. With various church denominations and their differing environmental standpoints, competing responses to climate change have emerged in Oceania. Before missionization, traditional knowledge guided the environmental sphere, and it influences current Christian teachings. Each atoll has its own distinctive body of traditional knowledge, but a unique relationship with nature unites all the islands. The interconnectedness of land, sea and people reflects the integrity between communities and their environments, and it influences the comprehension of Noah's story in the context of a climate change that threatens their habitats. Pacific Islanders experience climate change through the slow disappearance of their homelands.
The Western world, by contrast, perceives climate change as a global issue that will affect the population in the long term. The Islanders therefore seek to comprehend this global phenomenon in a local context that reads climate change as the Great Deluge. Accordingly, the safety measures the parable promotes compensate for the danger of climate change. The rainbow covenant gives hope in God's promise never to flood the Earth again, while Noah's survival relates to the Islanders' current situation. Since these communities have the lowest rate of carbon emissions, their contribution to anthropogenic climate change is scarce; this lack of environmental sin contextualizes them as contemporary Noahs who will ultimately survive sea level rise. Through secondary data analysis from a risk compensation perspective, this study aims to challenge the view that religion constitutes a barrier. Instead, religion is portrayed as a source of knowledge that enables comprehension of the communities' situation. By demonstrating that Pacific Islanders use Noah's story as a vessel for coping with the danger of climate change, the study argues that religion provides safety measures that compensate for the projected disappearance of the land. The purpose is to build a bridge between religious communities and scientific bodies and, ultimately, an understanding between two diverse perspectives. By addressing the practical challenges of interdisciplinary research with faith-based systems, the study uplifts the voices of communities and portrays their experiences as expressed through Christian thought.

Keywords: Christianity, climate change, existential threat, Pacific Islands, story of Noah

Procedia PDF Downloads 79
22806 A Secure Proxy Signature Scheme with Fault Tolerance Based on RSA System

Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi

Abstract:

Due to the rapid growth of modern communication systems, fault tolerance and data security are two important issues in secure transactions. During the transmission of data between sender and receiver, errors may occur frequently, forcing the sender to re-transmit the data so the receiver can correct these errors, which makes the system fragile. To improve the scalability of the scheme, we present a secure proxy signature scheme with fault tolerance built over an efficient and secure authenticated key agreement protocol based on the RSA system. Authenticated key agreement protocols play an important role in building a secure communications network between two parties.
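The proxy delegation and fault-tolerance layers are specific to the scheme, but the RSA primitive underneath can be illustrated. This is a textbook hash-then-sign sketch with toy key sizes, no padding, and no proxy machinery, shown only so the sign/verify asymmetry the abstract relies on is concrete:

```python
import hashlib

# Toy parameters for illustration only; real deployments use moduli of
# 2048 bits or more and a padded scheme such as RSASSA-PSS, never
# textbook RSA.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def _digest(message: bytes) -> int:
    """Hash the message and reduce it into the signing domain."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    """Signer raises the digest to the private exponent mod n."""
    return pow(_digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with (n, e) checks sig^e mod n against the digest."""
    return pow(signature, e, n) == _digest(message)
```

In a proxy scheme, the original signer would additionally issue a delegation credential from which the proxy derives its own signing exponent; that step is omitted here.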

Keywords: proxy signature, fault tolerance, RSA, key agreement protocol

Procedia PDF Downloads 275
22805 Estimating the Receiver Operating Characteristic Curve from Clustered Data and Case-Control Studies

Authors: Yalda Zarnegarnia, Shari Messinger

Abstract:

Receiver operating characteristic (ROC) curves have been widely used in medical research to illustrate a biomarker's performance in correctly distinguishing diseased and non-diseased groups. Correlated biomarker data arise in study designs that include subjects who share genetic or environmental factors. Information about this correlation might help identify family members at increased risk of disease development and may lead to initiating treatment to slow or stop progression to disease. Approaches appropriate to a case-control design matched by family identification must be able to accommodate the correlation inherent in the design when estimating the biomarker's ability to differentiate between cases and controls, as well as to handle estimation from a matched case-control design. This talk reviews existing methods for ROC curve estimation in settings with correlated data from case-control designs and discusses the limitations of current methods for analyzing correlated familial paired data. An alternative approach using conditional ROC curves is demonstrated, providing appropriate ROC curves for correlated paired data. The proposed approach uses the information about the correlation among biomarker values, producing conditional ROC curves that evaluate the ability of a biomarker to discriminate between diseased and non-diseased subjects in a familial paired design.
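The conditional ROC machinery for familial pairs is beyond a short sketch, but the standard independence-assuming estimate it generalizes is compact: the area under the empirical ROC curve equals the Mann-Whitney probability that a diseased subject's biomarker exceeds a healthy subject's. The values below are illustrative:

```python
def roc_auc(diseased, healthy):
    """Empirical AUC via the Mann-Whitney statistic: the probability
    that a randomly chosen diseased subject's biomarker value exceeds
    a randomly chosen healthy subject's, with ties counted one half.
    An AUC of 0.5 indicates no discrimination; 1.0 indicates perfect
    separation. Assumes independent observations, which is exactly
    the assumption the familial-pair methods above must relax."""
    total = 0.0
    for x in diseased:
        for y in healthy:
            total += 1.0 if x > y else (0.5 if x == y else 0.0)
    return total / (len(diseased) * len(healthy))
```

In a matched familial design, biomarker values within a family are correlated, so this pooled estimate can misstate the biomarker's discriminatory ability; the conditional ROC approach conditions on the family-level information instead.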

Keywords: biomarker, correlation, familial paired design, ROC curve

Procedia PDF Downloads 230
22804 Code-Switching among Local UCSI Stem and N-Stem Undergraduates during Knowledge Sharing

Authors: Adeela Abu Bakar, Minder Kaur, Parthaman Singh

Abstract:

In the Malaysian education system, formal English language learning takes place in a content-based classroom (CBC). To date, few Malaysian studies have investigated the effect of code-switching (CS) behaviour on students' knowledge sharing (KS) with their peers. The aim of this study is to investigate the frequency of, reasons for, and effect of CS from English to Bahasa Melayu among local STEM and N-STEM undergraduates on KS in a content-based classroom. The study employs a mixed-method research design with questionnaires and interviews as the instruments. The data were collected through the distribution of questionnaires and interviews with the undergraduates. The quantitative data were analysed using SPSS as simple frequencies and percentages, whereas the qualitative data were organized into themes and then analysed. The findings show that N-STEM undergraduates code-switch more than STEM undergraduates. In addition, both STEM and N-STEM undergraduates agree that CS acts as a catalyst for KS in a content-based classroom; however, they also acknowledge that excessive use of CS can hinder KS. The findings of the study can provide STEM and N-STEM undergraduates, education policymakers, language teachers, university educators, and students with significant insights into the role of CS in KS in a content-based classroom. Recommendations for future studies include increasing the number of participants and adding an observation component to the data collection.

Keywords: code-switching, content-based classroom, content and language integrated learning, knowledge sharing, STEM and N-STEM undergraduates

Procedia PDF Downloads 126
22803 Fuzzy Multi-Component DEA with Shared and Undesirable Fuzzy Resources

Authors: Jolly Puri, Shiv Prasad Yadav

Abstract:

Multi-component data envelopment analysis (MC-DEA) is a popular technique for measuring the aggregate performance of decision making units (DMUs) along with their components. However, conventional MC-DEA is limited to crisp input and output data, which may not always be available in exact form. In real-life problems, data may be imprecise or fuzzy. Therefore, in this paper, we (i) propose a fuzzy MC-DEA (FMC-DEA) model in which shared and undesirable fuzzy resources are incorporated, (ii) transform the proposed FMC-DEA model into a pair of crisp models using the α-cut approach, (iii) define the fuzzy aggregate performance of a DMU and the fuzzy efficiencies of components as fuzzy numbers, and (iv) illustrate a numerical example to validate the proposed approach.
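
The α-cut used to turn fuzzy data into crisp intervals can be illustrated for a triangular fuzzy number (a toy sketch only; the actual FMC-DEA transformation into a pair of crisp models is considerably more involved):

```python
def alpha_cut(tfn, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c): the interval of
    values whose membership degree is at least alpha."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

# A hypothetical fuzzy input "about 3, between 2 and 5":
print(alpha_cut((2.0, 3.0, 5.0), 0.5))   # (2.5, 4.0)
print(alpha_cut((2.0, 3.0, 5.0), 1.0))   # (3.0, 3.0)
```

At α = 1 the interval collapses to the most plausible value; lower α widens it, which is what lets one crisp model bound the fuzzy efficiency from below and another from above.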

Keywords: multi-component DEA, fuzzy multi-component DEA, fuzzy resources, decision making units (DMUs)

Procedia PDF Downloads 396
22802 A Computational Cost-Effective Clustering Algorithm in Multidimensional Space Using the Manhattan Metric: Application to the Global Terrorism Database

Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami

Abstract:

The increasing amount of collected data has limited the performance of current analysis algorithms. Thus, developing new cost-effective algorithms in terms of complexity, scalability, and accuracy has attracted significant interest. In this paper, a modified, effective k-means-based algorithm is developed and evaluated. The new algorithm aims to reduce the computational load without significantly affecting clustering quality. The algorithm uses the City Block distance and a new stop criterion to guarantee convergence. Experiments conducted on a real data set show its high performance compared with the original k-means version.
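
A City Block (Manhattan) variant of k-means can be sketched as follows; this is an illustrative minimal version with component-wise medians as centers and a plain fixed-point stop rule, not the authors' exact algorithm or stop criterion:

```python
import random

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def kmeans_manhattan(points, k, max_iter=100, seed=0):
    """Minimal k-means sketch under the City Block metric: the
    component-wise median (not the mean) minimizes total Manhattan
    distance within a cluster."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(max_iter):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: manhattan(p, centers[c]))
            clusters[i].append(p)
        # Update each center to the component-wise median of its cluster.
        new_centers = []
        for i, cl in enumerate(clusters):
            if not cl:
                new_centers.append(centers[i])
                continue
            dims = list(zip(*cl))
            new_centers.append(tuple(sorted(d)[len(d) // 2] for d in dims))
        if new_centers == centers:   # fixed point reached
            break
        centers = new_centers
    return centers, clusters

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centers, clusters = kmeans_manhattan(pts, 2)
print(sorted(centers))   # → [(1, 1), (8, 8)]
```

Because the Manhattan distance avoids squaring and square roots, each assignment step is cheaper than in Euclidean k-means, which is one source of the computational savings discussed above.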

Keywords: pattern recognition, global terrorism database, Manhattan distance, k-means clustering, terrorism data analysis

Procedia PDF Downloads 376
22801 AniMoveMineR: Animal Behavior Exploratory Analysis Using Association Rules Mining

Authors: Suelane Garcia Fontes, Silvio Luiz Stanzani, Pedro L. Pizzigatti Corrêa, Ronaldo G. Morato

Abstract:

Environmental changes and major natural disasters have become increasingly prevalent due to the damage humanity has caused to nature, and this damage directly affects the lives of animals. The study of animal behavior and animals' interactions with the environment can therefore provide knowledge that guides researchers and public agencies in preservation and conservation actions. Exploratory analysis of animal movement can determine patterns of animal behavior, and technological advances in animal tracking have expanded behavioral studies. There is much research on animal movement and behavior, but we note the lack of a proposal that combines these resources to allow exploratory analysis of animal movement while providing statistical measures of individual animal behavior and its interaction with the environment. The contribution of this paper is the framework AniMoveMineR, a unified solution that aggregates trajectory analysis and data mining techniques to explore animal movement data and provide a first step in answering questions about individual animal behavior and interactions with other animals over time and space. We evaluated the framework using data from monitored jaguars in Miranda, in the Brazilian Pantanal, to verify whether AniMoveMineR can identify the level of interaction between these jaguars. The results were positive and provided indications about the individual behavior of the jaguars and about which jaguars have the highest or lowest correlation.
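
The association-rule measures underlying such mining can be sketched on toy co-occurrence data; the jaguar identifiers and time-window "transactions" below are hypothetical, not the study's tracking data:

```python
# Each transaction lists the animals observed together in one
# hypothetical time window; support and confidence are the two basic
# association-rule measures.
transactions = [
    {"jaguar_A", "jaguar_B"},
    {"jaguar_A", "jaguar_B", "jaguar_C"},
    {"jaguar_A", "jaguar_C"},
    {"jaguar_B"},
]

def support(itemset):
    """Fraction of transactions containing every item in the set."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Of the windows containing the antecedent, the fraction that also
    contain the consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"jaguar_A", "jaguar_B"}))                 # → 0.5
print(round(confidence({"jaguar_A"}, {"jaguar_B"}), 2))  # → 0.67
```

High-confidence rules between pairs of animals are what indicate a strong level of interaction in a framework of this kind.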

Keywords: data mining, data science, trajectory, animal behavior

Procedia PDF Downloads 129
22800 A Study on Using Network Coding for Packet Transmissions in Wireless Sensor Networks

Authors: Rei-Heng Cheng, Wen-Pinn Fang

Abstract:

A wireless sensor network (WSN) is composed of a large number of sensors and one or a few base stations, where each sensor is responsible for detecting specific event information, which is sent back to the base station(s). However, saving electricity to extend the network lifetime is a problem that cannot be ignored in wireless sensor networks. Since the sensor network is used to monitor a region or specific events, reliably returning information to the base station is certainly important. Network coding is often used to enhance the reliability of network transmission. When a node needs to send out M data packets, it encodes these data with redundant data and sends out a total of M + R packets. If the receiver can obtain any M packets out of these M + R packets, it can decode and recover the original M data packets. Transmitting redundant packets, however, inevitably results in excess energy consumption. This paper explores the relationship between wireless transmission quality and the number of redundant packets. Ideally, each sensor can overhear nearby transmissions, learn the wireless transmission quality around it, and dynamically determine the number of redundant packets used in network coding.
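
Under a simplified channel model with independent per-packet delivery probability p (an assumption for illustration, not the adaptive scheme the paper studies), the benefit of the R redundant packets is the probability that at least M of the M + R packets arrive:

```python
from math import comb

def decode_probability(m, r, p):
    """Probability that at least m of the m + r coded packets arrive,
    assuming each packet is delivered independently with probability p,
    so the receiver can decode the original m data packets."""
    n = m + r
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# With 4 data packets over a link with 80% per-packet delivery, each
# added redundant packet raises the chance of successful decoding:
for r in range(4):
    print(r, round(decode_probability(4, r, 0.8), 3))
```

The diminishing returns visible in this curve are exactly why a sensor that knows its local link quality can pick the smallest R that meets a reliability target, saving energy.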

Keywords: energy consumption, network coding, transmission reliability, wireless sensor networks

Procedia PDF Downloads 382
22799 Pattern the Location and Area of Earth-Dumping Stations from Vehicle GPS Data in Taiwan

Authors: Chun-Yuan Chen, Ming-Chang Li, Xiu-Hui Wen, Yi-Ching Tu

Abstract:

This study explores the application of GPS (Global Positioning System) traces from construction vehicles, such as trucks and cranes, to help pattern the earth-dumping stations of traffic construction projects in Taiwan. Traffic construction in this research is defined as the engineering of high-speed railways, expressways, and other projects extending over distances of kilometers. Auditing the locations of earth-dumping stations and checking their compliance with regulations is one of the important tasks of the Taiwan EPA, as earth-dumping stations are a known source of particulate matter air pollution during construction. Because GPS data can be analyzed quickly and used conveniently, this study sought to identify dumping stations by modeling vehicle tracks from GPS data over the construction work cycle. The GPS data came from 13 vehicles involved in an expressway construction project in central Taiwan. The GPS footprints were converted to Keyhole Markup Language (KML) files so that truck tracks could be patterned with computer applications; the data were collected over about eight months, from February to October 2017. The GPS footprints identified a dumping station and outlined the earthwork areas, and the results were passed to the Taiwan EPA for on-site inspection. The Taiwan EPA issued advisory comments to the agency in charge of the construction to prevent air pollution. Compared with the common method of manual on-site inspection, the GPS-with-KML patterning and modeling method consumes less time. Moreover, monitoring GPS data from construction vehicles can help administrators develop and implement environmental management strategies.
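
Exporting GPS fixes to KML, as done for the truck tracks, can be sketched with the standard library alone; the track name and coordinates below are hypothetical, not the study's vehicle data:

```python
# Minimal KML export sketch: one Placemark with a LineString of the
# GPS fixes, viewable in common mapping software.
def track_to_kml(name, fixes):
    """fixes: list of (longitude, latitude) tuples, KML coordinate order."""
    coords = " ".join(f"{lon},{lat},0" for lon, lat in fixes)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <LineString><coordinates>{coords}</coordinates></LineString>
  </Placemark>
</kml>"""

kml = track_to_kml("truck-01", [(120.68, 24.14), (120.69, 24.15)])
print("<coordinates>" in kml)   # True
```

Repeated clusters of fixes where a track dwells (rather than passes through) are the kind of spatial signature that flags a candidate dumping station.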

Keywords: automatic management, earth-dumping station, environmental management, Global Positioning System (GPS), particulate matter, traffic construction

Procedia PDF Downloads 157
22798 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies

Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan

Abstract:

The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, especially when the process is carried out by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, is an important challenge for tourists in developing countries trying to make the best decisions about the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist guide with which tourists can search places of interest for their requested travel time. The service was designed with a three-tier architecture comprising data, logical processing, and presentation tiers. It was implemented with open-source software, client- and server-side programming languages (OpenLayers2, AJAX, and PHP), GeoServer as the map server, and the Web Feature Service (WFS) standard. The result is two distinct browser-based services: one for submitting spatial, descriptive, and multimedia volunteer data, and another for tourists and local officials. Local officials confirm the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine enables tourists to find a tourist place by province, city, and location at a specific time of interest. Implementing the tourist-guide service with this methodology lets current tourists participate in free data collection and sharing for future tourists, provides real-time data sharing and access for all, avoids blind selection of travel destinations, and significantly decreases the cost of providing such services.

Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping

Procedia PDF Downloads 84
22797 Genetics, Law and Society: Regulating New Genetic Technologies

Authors: Aisling De Paor

Abstract:

Scientific and technological developments are driving genetics and genetic technologies into the public sphere. Scientists are making genetic discoveries about the make-up of the human body and the cause and effect of disease, diversity and disability amongst individuals. Technological innovation in the field of genetics is also advancing, with the development of genetic testing and other emerging genetic technologies, including gene editing (which offers the potential for genetic modification). In addition to the benefits for medicine, health care and humanity, these genetic advances raise a range of ethical, legal and societal concerns. From an ethical perspective, such advances may, for example, change the concept of humans and what it means to be human. Science may take over in conceptualising human beings, which may push the boundaries of existing human rights. New genetic technologies, particularly gene editing techniques, create the potential to stigmatise disability by highlighting disability or genetic difference as something that should be eliminated or anticipated. From a disability perspective, use (and misuse) of genetic technologies raises concerns about discrimination and violations of the dignity and integrity of the individual. With an acknowledgement of the likely future orientation of genetic science, and in consideration of the intersection of genetics and disability, this paper highlights the main concerns raised as genetic science and technology advance (particularly gene editing developments) and the consequences for disability and human rights. Through the use of traditional doctrinal legal methodologies, it investigates how the use (and potential misuse) of gene editing creates the potential for a unique form of discrimination and stigmatisation to develop, as well as a potential gateway to a subtle new form of eugenics.
This article highlights the need to maintain caution as to the use, application and consequences of genetic technologies. With a focus on the law and policy position in Europe, it examines the need to control and regulate these new technologies, particularly gene editing. In addition to considering the need for regulation, this paper highlights non-normative approaches to address this area, including awareness raising and education, public discussion and engagement with key stakeholders in the field, and the development of a multifaceted genetics advisory network.

Keywords: disability, gene-editing, genetics, law, regulation

Procedia PDF Downloads 346
22796 Technico-Economical Study of a Rapeseed Based Biorefinery Using High Voltage Electrical Discharges and Ultrasounds as Pretreatment Technologies

Authors: Marwa Brahim, Nicolas Brosse, Nadia Boussetta, Nabil Grimi, Eugene Vorobiev

Abstract:

Rapeseed is a well-established crop in France, mainly dedicated to oil production. However, the economic potential of the residues of this industry (rapeseed hulls, rapeseed cake, rapeseed straw, etc.) has not been fully exploited, and currently only low-grade applications are found in the market. As a consequence, it was deemed of interest to develop a technological platform for converting rapeseed residues into value-added products, with a specific focus on the conversion of rapeseed straw into valuable molecules (e.g., lignin, glucose). Existing pretreatment technologies have many drawbacks, chiefly the production of sugar degradation products that limit the effectiveness of the saccharification and fermentation steps in the overall scheme of the lignocellulosic biorefinery. In addition, the viability of fractionation strategies is challenged by an increasingly standardized environmental context. Hence the need to find cleaner alternatives of comparable efficiency by exploiting physical phenomena that can destabilize the structural integrity of biomass without necessarily using chemical solvents. To meet increasingly stringent environmental standards, the present work studies new pretreatment strategies involving lower consumption of chemicals and an attenuation of treatment severity. These strategies couple physical treatments, either high voltage electrical discharges or ultrasounds, with conventional chemical pretreatments (soda and organosolv). Ultrasound treatment is based on the cavitation phenomenon, while high voltage electrical discharges cause an electrical breakdown accompanied by many secondary phenomena. The choice of process was based on a technological feasibility study taking into account the economic profitability of the whole chain after product valorization. Priority was given to valorizing sugars into bioethanol and to lignin sale.

Keywords: high voltage electrical discharges, organosolv, pretreatment strategies, rapeseed straw, soda, ultrasounds

Procedia PDF Downloads 353
22795 Non-Parametric Regression over Its Parametric Counterparts with Large Sample Size

Authors: Jude Opara, Esemokumo Perewarebo Akpos

Abstract:

This paper considers non-parametric linear regression against its parametric counterpart with a large sample size. A data set of anthropometric measurements of primary school pupils was used for the analysis, with 50 randomly selected pupils. The data were subjected to a normality test using the Anderson-Darling technique, and it was discovered that the residuals from the commonly used least squares method for fitting an equation to a set of (x, y) data points are not normally distributed (i.e., they do not follow a Gaussian distribution). The algorithms for nonparametric Theil's regression and its parametric OLS counterpart are stated in this paper. The R programming language was used for the analysis. The results showed a significant relationship between the response and the explanatory variable for both the parametric and non-parametric regressions. To compare the efficiency of the two methods, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) were used, and the nonparametric regression performed better than its parametric counterpart, as shown by its lower AIC and BIC values. The study recommends that future researchers examine the data set for outliers, expunge any that are detected, and re-analyze to compare results.
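
Theil's regression can be sketched directly from its definition, the median of all pairwise slopes; the data below are illustrative, not the pupils' measurements:

```python
from itertools import combinations
from statistics import median

def theil_sen(xs, ys):
    """Theil's regression sketch: the slope is the median of all
    pairwise slopes, and the intercept is the median residual offset.
    Medians make the fit robust to non-normal residuals and outliers."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2)
              if x2 != x1]
    b = median(slopes)
    a = median(y - b * x for x, y in zip(xs, ys))
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]   # roughly y = 2x
a, b = theil_sen(xs, ys)
print(round(b, 3))
```

Because the estimate depends on medians rather than squared errors, a single aberrant point shifts the fit far less than it would shift an OLS fit, which is why the abstract's recommendation to screen for outliers matters mostly for the parametric method.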

Keywords: Theil’s regression, Bayesian information criterion, Akaike information criterion, OLS

Procedia PDF Downloads 297
22794 Improving the Performance of Requisition Document Online System for Royal Thai Army by Using Time Series Model

Authors: D. Prangchumpol

Abstract:

This research presents a method for forecasting requisition document demands for military units by using exponential smoothing methods to analyze the data. The data used in the forecast are actual requisition document data from The Adjutant General Department. The forecasting results show that the Holt-Winters trend and seasonality method with α = 0.1, β = 0, γ = 0 is appropriate for requisition document demand. In addition, the researcher has developed an online requisition system to improve the performance of requisition document processing at The Adjutant General Department and to ensure that operations can be audited.
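
With β = 0 and γ = 0, the Holt-Winters recursion reduces to updating only the level, i.e., simple exponential smoothing with α = 0.1. A sketch on hypothetical demand figures (not the department's actual data):

```python
def exponential_smoothing(series, alpha=0.1):
    """Simple exponential smoothing: with beta = gamma = 0, the
    Holt-Winters recursion updates only the level, so the final level
    serves as the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

demand = [120, 130, 125, 140, 135, 150]   # hypothetical monthly demands
print(round(exponential_smoothing(demand), 2))   # → 126.99
```

The small α = 0.1 weights history heavily, so the forecast reacts slowly to recent spikes, which suits a demand series with noise but little trend or seasonality.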

Keywords: requisition, holt–winters, time series, royal thai army

Procedia PDF Downloads 294
22793 Geoelectric Survey for Groundwater Potential in Waziri Umaru Federal Polytechnic, Birnin Kebbi, Nigeria

Authors: Ibrahim Mohammed, Suleiman Taofiq, Muhammad Naziru Yahya

Abstract:

Geoelectrical measurements using the Schlumberger Vertical Electrical Sounding (VES) method were carried out at Waziri Umaru Federal Polytechnic, Birnin Kebbi, Nigeria, with the aim of determining the groundwater potential of the area. Twelve (12) VES data sets were collected using a Terrameter (ABEM SAS 300c) and analyzed using computer software (IPI2win), which gives an automatic interpretation of the apparent resistivity. The interpreted VES data were used to characterize three to five geo-electric layers, from which the aquifer units were delineated. The analysis indicated that a water-bearing formation exists in the third and fourth layers, with resistivity ranges of 312 to 767 Ωm and 9.51 to 681 Ωm, respectively. The thickness of the formation ranges from 14.7 to 41.8 m, and its depth from 8.22 to 53.7 m. Based on these results, five (5) VES stations (A4, A5, A6, B1, and B2) were recommended as the most viable locations for groundwater exploration in the study area. Across the entire area, the water-bearing formation occurred at a maximum depth of 53.7 m at the time of the survey.
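
The apparent resistivity that tools such as IPI2win interpret is computed from the field readings via the Schlumberger geometric factor; a sketch with hypothetical electrode spacings and readings (not the survey's values):

```python
from math import pi

def schlumberger_apparent_resistivity(ab_half, mn_half, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array:
    rho_a = K * dV / I, with geometric factor
    K = pi * ((AB/2)^2 - (MN/2)^2) / MN,
    where AB is the current-electrode spacing and MN the
    potential-electrode spacing."""
    k = pi * (ab_half**2 - mn_half**2) / (2 * mn_half)
    return k * delta_v / current

rho = schlumberger_apparent_resistivity(ab_half=50.0, mn_half=5.0,
                                        delta_v=0.02, current=0.1)
print(round(rho, 1))   # → 155.5
```

Repeating this computation while expanding AB/2 gives the sounding curve, whose shape the inversion software fits with a layered-earth model to recover layer resistivities and thicknesses.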

Keywords: aquifer, depth, groundwater, resistivity, Schlumberger

Procedia PDF Downloads 154
22792 Collection, Cryopreservation, and Fertilizing Potential of Bovine Spermatozoa Collected from the Epididymis Evaluated by Conventional Techniques and by Flow Cytometry

Authors: M. H. Moreira da Silva, L. Valadao, F. Moreira da Silva

Abstract:

In the present study, the fertilizing capacity of bovine spermatozoa was evaluated before and after cryopreservation. For this, the testicles of 100 bulls slaughtered on Terceira Island were dissected, the epididymal tails were separated, and semen was recovered by the flotation method and then evaluated by phase-contrast microscopy and by flow cytometry. For phase-contrast microscopy, a drop of semen was used to evaluate the percentage of motile spermatozoa (0 to 100%) and motility (0 to 5). After determining the concentration and the abnormal forms, semen was diluted to a final concentration of 50 x 10⁶ spz/ml and evaluated by flow cytometry for membrane and acrosome integrity using the conjugated fluorescent probes propidium iodide (PI) and Arachis hypogaea agglutinin (FITC-PNA). Freezing was carried out in a programmable semen freezer, using 0.25 ml straws with a total of 20 x 10⁶ viable sperm per straw and glycerol as cryoprotectant at a final concentration of 0.58 M. On average, a total of 7.25 ml of semen was collected from each bull. The viability and vitality rates were 83.22 ± 7.52% and 3.8 ± 0.4, respectively, before freezing, decreasing to 58.81 ± 11.99% and 3.6 ± 0.6, respectively, after thawing. Regarding cytoplasmic droplets, a high percentage of spermatozoa had medial cytoplasmic droplets (38.47%), with only 3.32% and 0.15% presenting proximal and distal cytoplasmic droplets, respectively. By flow cytometry, the percentage of sperm with a damaged plasma membrane and intact acrosome was 3.61 ± 0.99% before freezing, increasing slightly to 4.21 ± 1.86% after cryopreservation (p<0.05). The percentage of spermatozoa with damaged plasma membrane and acrosome was 3.37 ± 1.87% before freezing, increasing to 4.34 ± 1.16% after thawing, with no significant difference observed between these two values.
The percentage of sperm with an intact plasma membrane and damaged acrosome was 2.04 ± 2.34% before freezing, decreasing to 0.89 ± 0.48% after thawing (p<0.05). The percentage of sperm with intact plasma membrane and acrosome was 90.99 ± 2.75% before freezing, with a slight decrease to 90.57 ± 3.15% after thawing (p<0.05). From this study, it can be clearly concluded that after the slaughtering of bulls, spermatozoa can be recovered from the epididymis and cryopreserved while maintaining an excellent rate of sperm viability and quality after thawing.

Keywords: bovine semen, epididymis, cryopreservation, fertility assessment

Procedia PDF Downloads 74