Search results for: scientific data mining
24362 Mixtures of Length-Biased Weibull Distributions for Loss Severity Modelling
Authors: Taehan Bae
Abstract:
In this paper, a class of length-biased Weibull mixtures is presented to model loss severity data. The proposed model generalizes the Erlang mixtures with a common scale parameter, and it shares many important modelling features with the Erlang mixtures, such as the flexibility to fit various data distribution shapes and weak denseness in the class of positive continuous distributions. We show that the asymptotic tail estimate of the length-biased Weibull mixture is of Weibull type, which makes the model effective for fitting loss severity data with heavy-tailed observations. A method of statistical estimation is discussed, with applications to real catastrophic loss data sets.
Keywords: Erlang mixture, length-biased distribution, transformed gamma distribution, asymptotic tail estimate, EM (expectation-maximization) algorithm
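As a quick illustration of the underlying density, the following sketch assumes the textbook length-biased construction (weighting the ordinary Weibull density by x/E[X]) and a mixture over several shapes with a common scale; it is not the paper's exact parameterization:

```python
import math

def weibull_pdf(x, shape, scale):
    """Standard Weibull density f(x) = (k/s) * (x/s)**(k-1) * exp(-(x/s)**k)."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def length_biased_weibull_pdf(x, shape, scale):
    """Length-biased density f_LB(x) = x * f(x) / E[X], with E[X] = s * Gamma(1 + 1/k)."""
    mean = scale * math.gamma(1.0 + 1.0 / shape)
    return x * weibull_pdf(x, shape, scale) / mean

def mixture_pdf(x, weights, shapes, scale):
    """Mixture of length-biased Weibulls sharing a common scale parameter."""
    return sum(w * length_biased_weibull_pdf(x, k, scale)
               for w, k in zip(weights, shapes))

# Two components with hypothetical weights and shapes, common scale 2.0
print(mixture_pdf(2.5, [0.6, 0.4], [1.5, 3.0], 2.0))
```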
Procedia PDF Downloads 224
24361 Robust Data Image Watermarking for Data Security
Authors: Harsh Vikram Singh, Ankur Rai, Anand Mohan
Abstract:
In this paper, we propose a secure and robust data-hiding algorithm based on the DCT, the Arnold transform, and a chaotic sequence. The watermark image is scrambled by the Arnold cat map to increase its security, and the chaotic map is then used to spread the watermark signal in the middle band of the DCT coefficients of the cover image. The chaotic map can serve as a pseudo-random generator for digital data hiding, increasing security and robustness. The robustness and imperceptibility of the proposed algorithm were evaluated using the bit error rate (BER), normalized correlation (NC), and peak signal-to-noise ratio (PSNR) for different watermarks, cover images (Lena, Girl, Tank), and gain factors. We use a binary logo image and a text image as watermarks. The experimental results demonstrate that the proposed algorithm achieves higher security and robustness against JPEG compression, as well as other attacks such as noise addition, low-pass filtering, and cropping, compared to other existing algorithms using DCT coefficients. Moreover, the original cover image is not needed to recover the watermark.
Keywords: data hiding, watermarking, DCT, chaotic sequence, Arnold transform
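The Arnold cat map scrambling step can be sketched as follows. The map (x, y) -> ((x + y) mod N, (x + 2y) mod N) is the standard form; since the map is periodic, descrambling amounts to continuing the iteration up to the map's period. The iteration count below is illustrative, and the paper's key handling is not reproduced:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Scramble a square image with the Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold cat map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

watermark = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy 8x8 watermark
scrambled = arnold_scramble(watermark, iterations=3)
```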
Procedia PDF Downloads 516
24360 An Empirical Investigation of Big Data Analytics: The Financial Performance of Users versus Vendors
Authors: Evisa Mitrou, Nicholas Tsitsianis, Supriya Shinde
Abstract:
In the age of digitisation and globalisation, businesses have shifted online and are investing in big data analytics (BDA) to respond to changing market conditions and sustain their performance. Our study shifts the focus from the adoption of BDA to the impact of BDA on financial performance. We explore the financial performance of both BDA-vendors (business-to-business) and BDA-clients (business-to-customer). We distinguish between the five BDA-technologies (big-data-as-a-service (BDaaS), descriptive, diagnostic, predictive, and prescriptive analytics) and discuss them individually. Further, we use four perspectives (internal business process, learning and growth, customer, and finance) and discuss the significance of how each of the five BDA-technologies affects the performance measures of these four perspectives. We also present the analysis of employee engagement, average turnover, average net income, and average net assets for BDA-clients and BDA-vendors. Our study also explores the effect of the COVID-19 pandemic on business continuity for both BDA-vendors and BDA-clients.
Keywords: BDA-clients, BDA-vendors, big data analytics, financial performance
Procedia PDF Downloads 125
24359 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data
Authors: Saeid Gharechelou, Ryutaro Tateishi
Abstract:
Earthquakes are inevitable catastrophic natural disasters. Damage to buildings and man-made structures, where most human activities occur, is the major cause of earthquake casualties. A comparison of optical and SAR data is presented for the Kathmandu valley, which was severely shaken by the 2015 Nepal earthquake. Although existing research has produced optical-data-based estimates or suggested the combined use of optical and SAR data for improved accuracy, cloud-free optical images are not assured when they are urgently needed. Therefore, this research focuses on developing a SAR-based technique targeting rapid and accurate geospatial reporting. Considering the limited time available in a post-disaster situation, the technique offers quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. Pre-seismic, co-seismic, and post-seismic InSAR coherence was used to detect changes in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification to detect the damaged area, and the same ground truth data were used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integrated optical-SAR data. Since cloud-free images are not assured when urgently needed after an earthquake, further research on improving SAR-based damage detection is suggested. It is expected that quick reporting of the post-disaster damage situation, quantified by rapid earthquake assessment, will assist in channelling rescue and emergency operations and in informing the public about the scale of damage.
Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid damage monitoring, 2015 Nepal earthquake
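The coherence-change step can be illustrated with a minimal sketch. The window size and the synthetic SLC patches below are assumptions for illustration, not the study's Sentinel-1A processing chain:

```python
import numpy as np

def insar_coherence(s1, s2, win=5):
    """Sliding-window estimate of interferometric coherence between two
    co-registered SLC images:
        gamma = |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2)
    """
    def boxsum(a):
        # separable box filter via 1-D convolutions along each axis
        k = np.ones(win)
        a = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, a)
        return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, a)
    num = np.abs(boxsum(s1 * np.conj(s2)))
    den = np.sqrt(boxsum(np.abs(s1) ** 2) * boxsum(np.abs(s2) ** 2))
    return num / np.maximum(den, 1e-12)

rng = np.random.default_rng(0)
shape = (64, 64)
scene = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
def noise():
    return 0.1 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

pre = scene + noise()  # pre-seismic acquisition stays coherent with the scene
co = scene + noise()   # co-seismic acquisition with a decorrelated (damaged) patch
co[20:40, 20:40] = rng.standard_normal((20, 20)) + 1j * rng.standard_normal((20, 20))

# Coherence loss between the pre-seismic and co-seismic pairs flags damage
loss = insar_coherence(scene, pre) - insar_coherence(scene, co)
print(loss[25:35, 25:35].mean(), loss[:10, :10].mean())  # high inside patch, ~0 outside
```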
Procedia PDF Downloads 173
24358 Scheduling Nodes Activity and Data Communication for Target Tracking in Wireless Sensor Networks
Authors: AmirHossein Mohajerzadeh, Mohammad Alishahi, Saeed Aslishahi, Mohsen Zabihi
Abstract:
In this paper, we consider sensor nodes capable of measuring bearings (the relative angle to the target). We use geometric methods to select a set of observer nodes responsible for collecting data from the target. Considering the characteristics of target tracking applications, a significant number of sensor nodes are usually inactive. Therefore, in order to minimize total network energy consumption, a set of sensor nodes, called sentinels, is periodically selected for monitoring, controlling the environment, and transmitting data through the network, while the other nodes remain inactive. Furthermore, the proposed algorithm provides joint scheduling and routing for data transmission between network nodes and the fusion center (FC), which not only provides an efficient way to estimate the target position but also enables efficient target tracking. Performance evaluation confirms the superiority of the proposed algorithm.
Keywords: coverage, routing, scheduling, target tracking, wireless sensor networks
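A minimal sketch of bearings-only localization from two observer nodes; the positions and angles below are hypothetical, and the paper's observer-selection geometry is not reproduced:

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Locate a target from two bearing measurements (radians from the x-axis)
    taken by sensors at known positions p1 and p2: intersect the two rays."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2)
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.array(p2, float) - np.array(p1, float))
    return np.array(p1, float) + t[0] * d1

# Sensors at (0,0) and (10,0) both sight a target at (4,3)
target = triangulate([0, 0], np.arctan2(3, 4), [10, 0], np.arctan2(3, -6))
print(target)  # -> approximately [4. 3.]
```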
Procedia PDF Downloads 380
24357 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
Current real-estate value estimation is difficult for laymen and usually performed by specialists. This paper presents an automated estimation process, based on big data and machine-learning technology, that calculates the influence of building conditions on real-estate prices. The study analyzed actual building sales data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Based on that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for deriving machine-learning prototypes. The prediction model is formulated from previous examples; when new examples are applied, it analyses and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.
Keywords: apartment complex, big data, life-cycle building value analysis, machine learning
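As a rough analogue of the described workflow (the paper used RapidMiner Studio; the features and prices below are invented for illustration), a linear model exposes the weighted influence of each building condition:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical building-condition features: [age (years), floor area (m2),
# distance to subway (km), floor level]; target: price per m2 (million KRW).
X = np.array([[25, 84, 0.4, 10], [5, 59, 0.8, 3], [15, 114, 0.2, 18],
              [30, 84, 1.1, 7], [2, 99, 0.3, 21], [20, 74, 0.6, 12]], float)
y = np.array([14.2, 16.8, 18.9, 11.5, 21.3, 13.7])

model = LinearRegression().fit(X, y)
# The fitted coefficients play the role of the "weighted values" per factor
for name, w in zip(["age", "area", "subway_dist", "floor"], model.coef_):
    print(f"{name}: weight {w:+.3f}")
print("predicted price:", model.predict([[10, 84, 0.5, 15]])[0])
```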
Procedia PDF Downloads 375
24356 Persistence of DNA on Clothes Contaminated by Semen Stains after Washing
Authors: Ashraf Shebl, Bassam Garah, Radah Youssef
Abstract:
Sexual assault is usually a hidden crime in which the only witnesses are the victim and the assailant. For a variety of reasons, even the victim may be unable to provide a detailed account of the assault or the identity of the perpetrator, and the case history often deteriorates into one person's word against another's. With such limited initial information, the physical and biological evidence collected from the victim, the crime scene, and the suspect plays a pivotal role in the objective and scientific reconstruction of the events in question. The aim of this work is to examine whether DNA profiles can be recovered from repeatedly washed clothes contaminated by semen stains. About 1 ml of fresh semen (<1 h old) from a donor was deposited on four types of clothes (cotton, silk, polyester, and jeans), left to dry at room temperature, and then washed by washing machine (30°C-60°C) or by hand. Some items of clothing were washed once, some twice, and others three times. DNA could be extracted from some of these samples even after multiple washings. This study demonstrates that complete DNA profiles can be obtained from washed semen stains on different types of clothes, even after repeated washing. These results indicate that the clothes of victims must be examined even if they have been washed many times.
Keywords: sexual assault, DNA, persistence, clothes
Procedia PDF Downloads 200
24355 Walls, Barriers, and Fences to Informal Political Economy of Land Resource Accesses: A Case of Banyabunagana Along the Uganda–Congo Border, South Western Uganda, Kisoro District
Authors: Niringiye Fred
Abstract:
The Banyabunagana have always had access to land resources for grazing animals, sand mining, and farmland across the border in the Democratic Republic of the Congo, during both pre-colonial and colonial times, usually under informal arrangements facilitated by kinship ties and rent transactions. In recent periods, however, the government of the Democratic Republic of the Congo (DRC) has been pursuing a policy of constructing barriers such as walls and fences so that Banyabunagana communities cannot access land on the DRC side of the border. This is happening against a background of increased and intensified demand for land use on the Ugandan side. This paper discusses the reasons behind the construction of the walls, fences, and other barriers that deny Banyabunagana communities access to land in Bunagana Parish, Muramba Sub-county, Kisoro district, Uganda. The research attempts to answer two main questions, among others: what factors explain the construction of walls and fences that limit or deny informal access to land and other resources, and what policy options could ensure continued access to these resources for local communities.
Keywords: border, walls, fences, land resource access
Procedia PDF Downloads 128
24354 The Role of Creative Works Dissemination Model in EU Copyright Law Modernization
Authors: Tomas Linas Šepetys
Abstract:
On online content-sharing service platforms, the ability of creators to restrict the illicit use of audiovisual creative works has effectively been abolished, largely due to an infrastructure in which a huge volume of copyrighted audiovisual content can be made available to the public. The European Union legislator has attempted to strengthen the position of creators in the realm of online content-sharing services. Article 17 of the new Digital Single Market Directive considers online content-sharing service providers to carry out acts of communication to the public of any creative content uploaded to their platforms by users and posits requirements to obtain licensing agreements. While such regulation intends to assert authors' ability to effectively control the dissemination of their creative works, it also creates the threat of parody content being overblocked by automated content monitoring. This potentially paradoxical outcome of the EU legislator's efforts to deliver economic safeguards for creators on online content-sharing service platforms suggests a lack of insight on the legislator's part into the opportunities for economic exploitation of creative works that the online content-sharing infrastructure provides. The analysis conducted in this research discloses that the aforementioned irregularities in the dissemination of parody and other creative content are caused by the EU legislator's failure to assess the value-extraction conditions for parody creators on online content-sharing service platforms. Application of historical and modeling research methods reveals the existence of two creative content dissemination models with distinct mechanisms of commercial value creation. The obligations to obtain licenses, and the liability over creative content uploaded by users, set out in Article 17 of the Digital Single Market Directive represent a technological replication of the proprietary dissemination model, in which the creator is able to restrict access to creative content outside licensed retail channels. Online content-sharing service platforms, by contrast, represent an open dissemination model, in which the economic potential of creative content rests on an infrastructure of unrestricted user access and on partnership with the advertising services offered by the platform. Balanced modeling of the proprietary dissemination model within such an infrastructure requires not only automated content monitoring measures but also additional regulatory monitoring solutions to separate parody from other types of creative content. The example of the Digital Single Market Directive proves that regulation can dictate not only the technological establishment of a proprietary dissemination model but also a partial reduction of the open dissemination model, causing an imbalance between the economic interests of creators relying on these models. The results of this research establish the informative role of the creative works dissemination model in the EU copyright law modernization process. A thorough understanding of the commercial prospects of the open dissemination model intrinsic to the structure of online content-sharing service platforms requires and encourages EU legislators to regulate safeguards for parody content dissemination. Implementing such safeguards would result in the combined application of the proprietary and open dissemination models on online content-sharing service platforms and in balanced protection of creators' economic interests based explicitly on those dissemination models.
Keywords: copyright law, creative works dissemination model, digital single market directive, online content-sharing services
Procedia PDF Downloads 77
24353 Blockchain Technology Security Evaluation: Voting System Based on Blockchain
Authors: Omid Amini
Abstract:
Nowadays, technology plays the most important role in human life because people use it to share data and to communicate with each other, but the challenge is the security of these data. As more people turn to technology, more data are generated, and more hackers try to steal or infiltrate them. In addition, data under the control of a central authority face the risk of loss or alteration, which can create widespread anxiety for different people in different communities. In this paper, we investigate blockchain technology, which can guarantee information security and eliminate the challenge of central-authority access to information. People currently suffer from existing voting systems: the lack of transparency in voting is a big problem for society and government in most countries. Blockchain technology can be the best alternative to previous voting methods because it removes this most important challenge of voting. According to the results, this research can be a good starting point for getting acquainted with this new technology, especially its security aspects, and with how a blockchain-based voting system can be used in practice. It is concluded that the use of blockchain technology can solve the major security problem and lead to secure and transparent elections.
Keywords: blockchain, technology, security, information, voting system, transparency
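A minimal hash-chained vote ledger illustrates the tamper-evidence idea; this sketch omits consensus, voter identity, and ballot secrecy, all of which a real blockchain voting system would require:

```python
import hashlib
import json
import time

def make_block(vote, prev_hash):
    """Create a block whose hash covers the vote payload and the previous
    block's hash, so any later tampering breaks the chain."""
    block = {"vote": vote, "prev_hash": prev_hash, "timestamp": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block({"ballot_id": "genesis"}, "0" * 64)]
for ballot in [{"ballot_id": "b1", "choice": "A"},
               {"ballot_id": "b2", "choice": "B"}]:
    chain.append(make_block(ballot, chain[-1]["hash"]))

def verify(chain):
    """Recompute every hash; a single altered vote invalidates the chain."""
    for prev, cur in zip(chain, chain[1:]):
        blk = {k: v for k, v in cur.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(blk, sort_keys=True).encode()).hexdigest()
        if digest != cur["hash"] or cur["prev_hash"] != prev["hash"]:
            return False
    return True

print(verify(chain))  # True; flipping any recorded choice makes this False
```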
Procedia PDF Downloads 134
24352 Application of the Concept of Comonotonicity in Option Pricing
Authors: A. Chateauneuf, M. Mostoufi, D. Vyncke
Abstract:
Monte Carlo (MC) simulation is a technique that provides approximate solutions to a broad range of mathematical problems. A drawback of the method is its high computational cost, especially in high-dimensional settings, such as estimating the Tail Value-at-Risk for large portfolios or pricing basket options and Asian options. For these types of problems, one can construct an upper bound in the convex order by replacing the copula with the comonotonic copula. This comonotonic upper bound can be computed very quickly, but it gives only a rough approximation. In this paper, we introduce Comonotonic Monte Carlo (CoMC) simulation, which uses the comonotonic approximation as a control variate. CoMC is broadly applicable, and numerical results show a remarkable speed improvement. We illustrate the method for estimating Tail Value-at-Risk and for pricing basket options and Asian options when the log-returns follow a Black-Scholes model or a variance gamma model.
Keywords: control variate Monte Carlo, comonotonicity, option pricing, scientific computing
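A sketch of the control-variate construction for a two-asset lognormal basket call (all parameters assumed; the paper also covers Asian options and the variance gamma model). The comonotonic payoff is driven by a single common factor, so its expectation reduces to a cheap one-dimensional integral:

```python
import numpy as np

rng = np.random.default_rng(42)
f1, f2, s1, s2, rho, strike, n = 100.0, 100.0, 0.3, 0.2, 0.5, 195.0, 100_000

def payoff(z1, z2):
    """Basket call payoff for lognormal assets driven by normals z1, z2."""
    a = f1 * np.exp(s1 * z1 - 0.5 * s1 ** 2)
    b = f2 * np.exp(s2 * z2 - 0.5 * s2 ** 2)
    return np.maximum(a + b - strike, 0.0)

# Correlated normals for the true basket; the SAME z1 drives the
# comonotonic control variate, coupling the two estimators.
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
y = payoff(z1, z2)   # target payoff
c = payoff(z1, z1)   # comonotonic upper-bound payoff

# E[C] by dense 1-D quadrature over the standard normal density
grid = np.linspace(-8, 8, 20_001)
pdf = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
ec = np.sum(payoff(grid, grid) * pdf) * (grid[1] - grid[0])

cv = np.cov(y, c)
beta = cv[0, 1] / cv[1, 1]                 # optimal control coefficient
est = y.mean() - beta * (c.mean() - ec)    # control-variate estimator
print(f"plain MC: {y.mean():.4f}  CoMC: {est:.4f}")
print(f"variance ratio: {np.var(y - beta * c) / np.var(y):.3f}")
```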
Procedia PDF Downloads 516
24351 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have made huge amounts of sequencing data available in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications, such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, calling for novel data processing and analytic methods as well as substantial computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimal k-mer size and use it to build classification models, (iii) predict the phenotype of a given bacterial isolate from its whole genome sequence, and (iv) demonstrate the computing challenges associated with analyzing whole genome sequence data while producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and that the discrimination becomes more concise as the k-mer size increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, bringing to the fore the interplay among accuracy, computing resources, and the explainability of classification results. The analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is especially important in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
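A k-mer feature extraction sketch (toy sequence and k=3 for brevity; the study's resampling pipeline and AWD-LSTM models are not reproduced):

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k):
    """Count overlapping k-mers in a DNA sequence and return a fixed-length
    frequency vector over all 4**k possible k-mers."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    total = max(sum(counts.values()), 1)
    return [counts[kmer] / total for kmer in vocab]

# Toy example with k=3; the study found k=10 performed best, but the
# feature space then has 4**10 = 1,048,576 dimensions, hence the
# computing-resource trade-off the abstract describes.
vec = kmer_features("ATGCGATACGCTTGA", 3)
print(len(vec), round(sum(vec), 3))  # 64 features summing to 1.0
```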
Procedia PDF Downloads 170
24349 Design and Implementation of Flexible Metadata Editing System for Digital Contents
Authors: K. W. Nam, B. J. Kim, S. J. Lee
Abstract:
Along with the development of network infrastructures such as high-speed Internet and mobile environments, the explosion of multimedia data is expanding the range of multimedia services beyond voice and data services. Amid this trend, research is actively being conducted on the creation, management, and transmission of metadata for digital content in order to provide different services to users. This paper proposes a system for inserting, storing, and retrieving metadata about digital content. A metadata server using Binary XML was implemented for efficient storage space and retrieval speed, and the transport data size required for metadata retrieval was reduced. With the proposed system, metadata can be attached to moving objects in a video, and unnecessary overlap is minimized by an improved metadata storage structure. The proposed system can assemble metadata into one relevant topic, even if it is expressed in different media or different forms, and it is expected to handle complex network types of data.
Keywords: video, multimedia, metadata, editing tool, XML
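A plain-XML sketch of the kind of record such a server might store for a moving object in a video (the element and attribute names are illustrative; the paper's schema and its Binary XML encoding are not specified here):

```python
import xml.etree.ElementTree as ET

# Build a metadata record for a moving object in a video clip. The paper
# stores such records as Binary XML for compactness; plain XML is shown here.
video = ET.Element("video", id="clip-001")
obj = ET.SubElement(video, "object", id="obj-7", label="car")
ET.SubElement(obj, "interval", start="00:01:02.0", end="00:01:09.5")
ET.SubElement(obj, "bbox", frame="1550", x="120", y="80", w="64", h="40")

doc = ET.tostring(video, encoding="unicode")
print(doc)

# Retrieval: find every object whose label matches a query term
root = ET.fromstring(doc)
hits = [o.get("id") for o in root.iter("object") if o.get("label") == "car"]
print(hits)  # ['obj-7']
```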
Procedia PDF Downloads 173
24348 Development of Polymeric Fluorescence Sensor for the Determination of Bisphenol-A
Authors: Neşe Taşci, Soner Çubuk, Ece Kök Yetimoğlu, M. Vezir Kahraman
Abstract:
Bisphenol-A (BPA), 2,2-bis(4-hydroxyphenyl)propane, is one of the highest-volume chemicals produced worldwide. Studies have shown that BPA may have negative effects on the central nervous system and on the immune and endocrine systems. Several analytical methods for BPA have been reported, including electrochemical processes, chemical oxidation, ozonization, and spectrophotometric and chromatographic techniques. Compared with other conventional analytical techniques, optical sensors are reliable, quick, low-cost, and easy to use, and their high precision and sensitivity make them a much more advantageous method. In this work, a new photocured polymeric fluorescence sensor was prepared and characterized for Bisphenol-A (BPA) analysis. The membrane was characterized by attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) and scanning electron microscopy (SEM). The response characteristics of the sensor, including dynamic range, pH effect, and response time, were systematically investigated. Acknowledgment: This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Grant 115Y469.
Keywords: bisphenol-a, fluorescence, photopolymerization, polymeric sensor
Procedia PDF Downloads 237
24347 System for Monitoring Marine Turtles Using Unstructured Supplementary Service Data
Authors: Luís Pina
Abstract:
The conservation of marine biodiversity keeps ecosystems in balance and ensures the sustainable use of resources. In this context, technological resources have been used to monitor marine species and allow biologists to obtain data in real time. Different mobile applications have been developed for monitoring data collection, but these systems are designed to be used only on third-generation (3G) phones or smartphones with Internet access, and in rural parts of developing countries, Internet services and smartphones are scarce. The objective of this work is therefore to develop a system to monitor marine turtles using Unstructured Supplementary Service Data (USSD), which users can access through basic mobile phones. The system aims to improve the data collection mechanism and enhance the effectiveness of current systems for monitoring sea turtles using any type of mobile device without Internet access. The system will be able to report information related to the biological activities of marine turtles and will serve as a platform to help marine conservation entities receive reports of illegal sales of sea turtles. It can also be used as an educational tool for communities, providing knowledge and allowing the inclusion of communities in the process of monitoring marine turtles. This work may therefore contribute information to decision-making and to the implementation of contingency plans for marine conservation programs.
Keywords: GSM, marine biology, marine turtles, unstructured supplementary service data (USSD)
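A sketch of how one step of such a USSD dialog might be handled. The CON/END reply convention follows common USSD gateways, and the menu items and codes are invented, not the deployed system's:

```python
def ussd_handler(text):
    """Handle one step of a USSD dialog for reporting turtle sightings.
    `text` is the full '*'-joined input history the gateway sends, e.g. '1*3'.
    'CON' keeps the session open; 'END' closes it."""
    steps = text.split("*") if text else []
    if not steps or steps == [""]:
        return ("CON Marine Turtle Monitor\n"
                "1. Report sighting\n"
                "2. Report illegal sale")
    if steps[0] == "1":
        if len(steps) == 1:
            return "CON Enter number of turtles seen:"
        return f"END Thank you. Sighting of {steps[1]} turtle(s) recorded."
    if steps[0] == "2":
        return "END Report noted. A conservation officer will follow up."
    return "END Invalid option."

print(ussd_handler(""))     # main menu
print(ussd_handler("1*3"))  # completed sighting report
```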
Procedia PDF Downloads 206
24346 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses
Authors: Matthew Baucum
Abstract:
With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analysis, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel's proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel's vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word counts. An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting the method's utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
Keywords: fMRI, machine learning, meta-analysis, text analysis
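The voxel-as-vector computation reduces to a few lines; the vectors below are random stand-ins for a trained semantic space, not embeddings from the actual study corpus:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50
# Hypothetical word vectors standing in for a semantic space trained on study texts
word_vecs = {w: rng.standard_normal(dim)
             for w in ["social", "reward", "vision", "memory", "agent"]}

def voxel_vector(words_in_activating_studies):
    """A voxel's vector is the normalized sum of the word vectors from all
    studies that reported activation in that voxel."""
    v = np.sum([word_vecs[w] for w in words_in_activating_studies], axis=0)
    return v / np.linalg.norm(v)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vox = voxel_vector(["social", "agent", "reward", "social"])
# Proximity of this voxel to a single term, or to an averaged term set
print(cosine(vox, word_vecs["social"]))
theory_of_mind = np.mean([word_vecs[w] for w in ["social", "agent"]], axis=0)
print(cosine(vox, theory_of_mind))
```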
Procedia PDF Downloads 450
24345 The Trend of Injuries in Building Fire in Tehran from 2002 to 2012
Authors: Mohammadreza Ashouri, Majid Bayatian
Abstract:
Analysis of fire data is a prerequisite for any plan to improve the level of safety in cities. Such an analysis can reveal signs of change over a given period and can be used as a measure of safety. Information on 66,341 fires (from 2002 to 2012) released by the Tehran Safety Services and Fire-Fighting Organization, together with data on population and the number of households provided by the Tehran Municipality and the Statistical Yearbook of Iran, was extracted. Using these data, the changes in fires, the injury rate, and the mortality rate were determined and analyzed. The injury and mortality rates were 59.58 and 86.12 per one million population of Tehran, respectively. During the study period, the number of fires and the number of fire stations increased by 104.38% and 102.63%, respectively. The largest share of fires (9.21%) occurred in the 4th District of Tehran. The results showed that fire data recording has not been systematically planned for fire prevention, although one way to reduce fire injuries is to develop a systematic plan of necessary actions for emergency situations. To establish a reliable basis for fire prevention, the stages, the definitions of working processes, and the cause-and-effect chains should be considered. A comprehensive statistical system should therefore be developed for reported and recorded fire data.
Keywords: fire statistics, fire analysis, accident prevention, Tehran
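The per-million rates reported above are simple ratios; a sketch with assumed counts and population (not the study's raw figures):

```python
# Per-million rates of the kind reported in the study. The population and
# the injury/death counts below are assumed round numbers for illustration.
population = 8_500_000           # approximate population of Tehran
injuries, deaths = 510, 730      # hypothetical period totals

injury_rate = injuries / population * 1_000_000
death_rate = deaths / population * 1_000_000
print(f"{injury_rate:.2f} injuries and {death_rate:.2f} deaths per million")
```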
Procedia PDF Downloads 186
24344 Design and Implementation of a Virtualization Platform for Providing Smart Tourism Services
Authors: Nam Don Kim, Jungho Moon, Tae Yun Chung
Abstract:
This paper proposes an Internet of Things (IoT) based virtualization platform for providing smart tourism services. The virtualization platform provides a consistent access interface to various types of data by naming IoT devices and legacy information systems as pathnames in a virtual file system. In other words, the IoT virtualization platform functions as middleware that uses metadata to describe the underlying collected data. The proposed platform makes it easy to provide customized tourism information using tourist locations collected by IoT devices, and additionally enables the creation of new interactive smart tourism services focused on those locations. The platform is efficient in that the provided tourism services are isolated from changes in the raw data, so services can be modified or expanded without changing the underlying data structure.
Keywords: internet of things (IoT), IoT platform, service platform, virtual file system (VFS)
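A minimal sketch of the pathname-based access idea (the class, paths, and data sources are illustrative, not the platform's actual API):

```python
# IoT devices and legacy systems are named as pathnames in a virtual file
# system, giving callers one uniform read interface regardless of source.
class VirtualFileSystem:
    def __init__(self):
        self._nodes = {}

    def mount(self, path, reader):
        """Register a data source (IoT device or legacy system) at a path."""
        self._nodes[path] = reader

    def read(self, path):
        """Uniform access: callers never see the underlying source type."""
        return self._nodes[path]()

vfs = VirtualFileSystem()
vfs.mount("/tourism/seoul/plaza/beacon-12/visitors", lambda: 42)
vfs.mount("/tourism/legacy/events-db/today", lambda: ["lantern festival"])

print(vfs.read("/tourism/seoul/plaza/beacon-12/visitors"))
print(vfs.read("/tourism/legacy/events-db/today"))
```

Because services address data only through paths, a beacon can be replaced by a different device (or a database migration) without any service-side change, which is the isolation property the abstract claims.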
Procedia PDF Downloads 505
24343 Atomic Absorption Spectroscopic Analysis of Heavy Metals in Cancerous Breast Tissues among Women in Jos, Nigeria
Authors: Opeyemi Peter Idowu
Abstract:
Breast cancer is prevalent among northern Nigerian women, especially in Jos, Plateau State, owing to anthropogenic activities such as solid-earth mineral mining dating as far back as 1904. In this study, atomic absorption spectrometry was used to determine the concentrations of eight heavy metals (Cd, As, Cr, Cu, Fe, Pb, Ni, and Zn) in cancerous and non-cancerous breast tissues of women in Jos, Nigeria. The levels of heavy metals ranged from 1.08 to 29.34 mg/kg, 0.29 to 10.76 mg/kg, 0.35 to 51.93 mg/kg, 5.15 to 62.93 mg/kg, 11.64 to 51.10 mg/kg, 0.42 to 83.16 mg/kg, 2.08 to 43.07 mg/kg, and 1.67 to 71.53 mg/kg for Cd, As, Cr, Cu, Fe, Pb, Ni, and Zn, respectively. Using MATLAB R2016a, significant differences (tᵥ = 0.0041 to 0.0317) were found between the levels of all the heavy metals in cancerous and non-cancerous breast tissues except Fe. At the 0.01 level of significance, positive correlations existed in cancerous breast tissues between Pb and Fe, Pb and Cu, Ni and Fe, Cr and Pb, and Ni and Cr (r = 0.583 to 0.998). Using ANOVA, significant differences also occurred in the levels of these heavy metals in cancerous breast tissues (p = 1.910510×10⁻²⁶). The relatively high levels of the cancer-inducing heavy metals (Cd, As, Cr, and Pb) compared with the control indicate contamination or exposure to heavy metals, which could be a major cause of cancer in these subjects, through ingestion, inhalation, or other routes linked to one anthropogenic activity or another. Therapeutic measures such as gastric lavage, ascorbic acid consumption, and divalent cation treatment are all effective ways to manage heavy metal toxicity in the subjects and lower the risk of breast cancer.
Keywords: breast cancer, heavy metals, spectroscopy, bio-accumulation
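The reported comparisons can be reproduced in form with standard tests; the concentrations below are simulated within the reported Pb range and are not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical Pb concentrations (mg/kg) in cancerous vs control tissue
pb_cancerous = rng.uniform(0.42, 83.16, size=30)
pb_control = rng.uniform(0.1, 5.0, size=30)

# Two-sample t-test for a difference between cancerous and control tissue
t_stat, p_val = stats.ttest_ind(pb_cancerous, pb_control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4g}")

# Pearson correlation between two metals measured in the same tissues
fe_cancerous = 0.6 * pb_cancerous + rng.normal(0, 5, size=30)
r, p = stats.pearsonr(pb_cancerous, fe_cancerous)
print(f"r = {r:.3f}, p = {p:.4g}")
```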
Procedia PDF Downloads 30
24342 A Review on 3D Smart City Platforms Using Remotely Sensed Data to Aid Simulation and Urban Analysis
Authors: Slim Namouchi, Bruno Vallet, Imed Riadh Farah
Abstract:
3D urban models provide powerful tools for decision making, urban planning, and smart city services, and the accuracy of such 3D-based systems is directly related to the quality of the models. Since manual large-scale modeling of cities or countries is a highly time-intensive and expensive process, fully automatic 3D building generation is needed. However, the result of the 3D modeling process depends on the input data, the properties of the captured objects, and the required characteristics of the reconstructed 3D model. Nowadays, producing a 3D real-world model is no longer a problem: remotely sensed data have increased remarkably in recent years, especially data acquired using unmanned aerial vehicles (UAVs), and as scanning techniques develop, the amount of captured data grows and its resolution becomes more precise. This paper presents a literature review that aims to identify different methods for automatic 3D building extraction, either from LiDAR alone or from the combination of LiDAR with satellite or aerial images. We then present open-source technologies and data models (e.g., CityGML, PostGIS, Cesiumjs) used to integrate these models into geospatial base layers for smart city services.
Keywords: CityGML, LiDAR, remote sensing, GIS, Smart City, 3D urban modeling
Procedia PDF Downloads 137
24341 Structural Damage Detection via Incomplete Model Data Using Output Data Only
Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan
Abstract:
Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining efficient tools to detect damage in structures at an early stage. Over the past decades, a subject that has received considerable attention in the literature is damage detection based on variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location in an incomplete structural system using output data only. The method indicates damage from free vibration test data by using the Two-Point Condensation (TPC) technique, which creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion against the measured test data and are then compared with the original (undamaged) stiffness matrices; large percentage changes in the matrix coefficients indicate the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model after inducing a thickness change in one element. Two cases are considered, and in both the method detects the damage and determines its location accurately. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data, and its efficiency suggests that the technique can also be used for large structures.
Keywords: damage detection, optimization, signal processing, structural health monitoring, two-point condensation
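The matrix-comparison step can be sketched as follows; the 2x2 stiffness values and threshold are illustrative, not data from the beam test:

```python
import numpy as np

def damage_indicator(k_orig, k_curr, threshold=0.10):
    """Flag coefficients of the condensed 2-DOF stiffness matrices whose
    relative change exceeds a threshold; large changes point toward the
    damaged element."""
    change = np.abs(k_curr - k_orig) / np.abs(k_orig)
    return change, change > threshold

k_undamaged = np.array([[12.0e6, -6.0e6], [-6.0e6, 12.0e6]])   # identified baseline
k_measured = np.array([[12.1e6, -6.05e6], [-6.05e6, 9.2e6]])   # one DOF softened

change, flags = damage_indicator(k_undamaged, k_measured)
print(np.round(change * 100, 1))  # percentage change per coefficient
print(flags)                      # True marks a suspected damage location
```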
Procedia PDF Downloads 366
24340 Brand Positioning in Iran: A Case Study of the Professional Soccer League
Authors: Homeira Asadi Kavan, Seyed Nasrollah Sajjadi, Mehrzade Hamidi, Hossein Rajabi, Mahdi Bigdely
Abstract:
The positioning strategies of a sports brand can create a unique impression in the minds of fans, sponsors, and other stakeholders. To influence potential customers' perception in an effective and positive way, a brand's positioning strategy must be unique, credible, and relevant. Many sports clubs in Iran have struggled to implement brand positioning and achieve its benefits, for reasons such as lack of experience, a scarcity of experts in sports branding, and a lack of related research in this field. This study provides a comprehensive theoretical framework and action plan for sport managers and marketers to design and implement effective brand positioning, enabling their clubs to be distinguishable from competing brands. The study instrument was interviews with sports marketing and brand experts who have worked in the industry for a minimum of 20 years. Qualitative data analysis was performed using the Atlas.ti text mining software, version 7, and open, axial, and selective coding were employed to uncover and systematically analyze important and complex phenomena and elements. The findings show 199 effective elements of positioning strategies in the Iran Professional Soccer League, categorized into 23 concepts and sub-categories: structural prerequisites, strategic management prerequisites, commercial prerequisites, major external prerequisites, brand personality, club symbols, emotional aspects, event aspects, fans' strategies, marketing information strategies, marketing management strategies, empowerment strategies, executive management strategies, league context, fans' background, market context, the club's organizational context, support context, major contexts, political-legal elements, economic factors, social factors, and technological factors. The study model was developed around six main dimensions: causal prerequisites, the axial phenomenon (brand position), strategies, context factors, interfering factors, and consequences. Based on the findings, practical recommendations and strategies are suggested to help club managers and marketers develop and improve their clubs' brand positioning and activities.
Keywords: brand positioning, soccer club, sport marketing, Iran professional soccer league, brand strategy
Procedia PDF Downloads 138
24339 Expanding the Evaluation Criteria for Wind Turbine Performance
Authors: Ivan Balachin, Geanette Polanco, Jiang Xingliang, Hu Qin
Abstract:
The problem of global warming has raised interest in renewable energy sources, and reducing the cost of wind energy is a challenge. Before building a wind park, conditions such as average wind speed, wind direction, the duration of each wind condition, and the probability of icing must be considered in the design phase. The operating values used in setting control systems will also depend on these variables. Here, a procedure is proposed for inclusion in the evaluation of wind turbine performance, based on the amplitude of wind changes, the number of changes, and their duration. A generic case study based on actual data is presented. Data analysis techniques were applied to model the power required by the yaw system as a function of the amplitude and number of wind changes. A theoretical model relating time, the amplitude of wind changes, and the angular speed of nacelle rotation was identified.
Keywords: field data processing, regression determination, wind turbine performance, wind turbine placing, yaw system losses
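A least-squares sketch of the proposed relationship between wind-change descriptors and yaw-system energy (all numbers assumed; the paper's fitted model is not reproduced):

```python
import numpy as np

# Illustrative data: each row is (amplitude of direction change in degrees,
# number of changes, mean duration in seconds) over an observation window.
X = np.array([[10, 120, 30], [25, 80, 45], [40, 60, 60],
              [15, 150, 20], [35, 90, 50], [20, 110, 35]], dtype=float)
yaw_energy = np.array([36.0, 61.0, 74.0, 47.0, 80.0, 50.0])  # kWh, assumed

# Ordinary least-squares fit of yaw energy on the three descriptors
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, yaw_energy, rcond=None)
print("weights (amplitude, count, duration, intercept):", np.round(coef, 3))
print("fitted yaw energy:", np.round(A @ coef, 1))
```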
Procedia PDF Downloads 391
24338 Clinical Study of the Prunus dulcis (Almond) Shell Extract on Tinea capitis Infection
Authors: Nasreen Thebo, W. Shaikh, A. J. Laghari, P. Nangni
Abstract:
The biomedical applications of Prunus dulcis (almond) shell extract are demonstrated. The shell extract was prepared by the Soxhlet method and characterized by UV-visible spectrophotometry, atomic absorption spectrophotometry (AAS), FTIR, and GC-MS. In this study, the antifungal activity of the almond shell extract was observed against clinically isolated pathogenic fungi by the strip method, and the antioxidant potential of the crude shell extract was evaluated using the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging system. Short-term therapy required only 20 days. The total antioxidant activity varied from 94.38 to 95.49%, and the total phenolic content was 4.455 mg/g in the almond shell extract. The results indicate a great therapeutic potential against Tinea capitis infection of the scalp. The shell extract shows scientific evidence of clinical efficacy, was found useful in the treatment of dermatologic disorders, and can be recommended for patenting.
Keywords: Tinea capitis, DPPH, FTIR, GC-MS, therapeutic treatment
Procedia PDF Downloads 381
24337 An Exhaustive All-Subsets Examination of Trade Theory on WTO Data
Authors: Masoud Charkhabi
Abstract:
We examine trade theory with the following motivation. The full set of World Trade Organization data is organized into country-year pairs, each treated as a different entity. Topological Data Analysis reveals that among the 16 regions and 240 region-year pairs there exists a distinguishable group of region-period pairs. The generally accepted periods of shifts from dissimilar-dissimilar to similar-similar trade in goods among regions are examined from this new perspective, with the period breaks treated as cumulative and flexible. This type of all-subsets analysis is motivated by computer science and is made possible with lossy compression and graph theory. The results question many patterns in similar-similar to dissimilar-dissimilar trade and also show indications of economic shifts that only later become evident in other economic metrics.
Keywords: econometrics, globalization, network science, topological data analysis, trade theory, visualization, world trade
Procedia PDF Downloads 374
24336 Augmented Reality in Advertising and Brand Communication: An Experimental Study
Authors: O. Mauroner, L. Le, S. Best
Abstract:
Digital technologies offer many opportunities in the design and implementation of brand communication and advertising. Augmented reality (AR) is an innovative technology in marketing communication built on the fact that virtual interaction with a product ad offers additional value to consumers. AR enables consumers to obtain (almost) real product experiences through virtual information even before purchasing a product. The aim of AR applications in advertising is the in-depth examination of product characteristics to enhance both product knowledge and brand knowledge. Interactive ad design provides observers with an intense examination of a specific advertising message and therefore leads to better brand knowledge; the elaboration likelihood model and the central route to persuasion strongly support this argument. Nevertheless, AR in brand communication is still at an early stage, and scientific findings about the impact of AR on information processing and brand attitude are therefore rare. The aim of this paper is to empirically investigate the potential of AR applications combined with traditional print advertising. To that end, an experimental design with different levels of interactivity is built to measure the impact of an ad's interactivity on different variables of advertising effectiveness.
Keywords: advertising effectiveness, augmented reality, brand communication, brand recall
Procedia PDF Downloads 305
24335 Geometry of the Bandaging Procedure and Its Application while Wrapping Bandages for Treatment of Leg Ulcers
Authors: Monica Puri Sikka, Subrato Ghosh, Arunangshu Mukhopadhyay
Abstract:
Appropriate compression bandaging is important in the compression therapy of medical conditions such as venous leg ulcers. The high-compression approach employed for treating venous leg ulcers should be used correctly so that sufficient (but not excessive) pressure is applied. Bandages used to treat venous disease by compression should achieve and sustain effective levels and gradients of pressure and minimize the risk of pressure trauma. To maintain graduated compression on the limb, the bandage needs to be applied at the same tension in each layer from the ankle to the knee. In this paper, the geometry of various bandaging procedures is used to wrap each layer of bandage by marking the relaxed length of the bandage. The relaxed length is calculated from the stretch, the average circumference of the limb on which the bandage is to be applied, and the bandaging technique to be used. This paper thus develops a scientific approach to applying the bandage that reduces inter-operator variability in applying the same tension in each successive layer.
Keywords: bandaging, compression, inter-operator variability, graduated, relaxed length, stretch
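A geometric sketch of the relaxed-length calculation for a spiral wrap on an idealized cylindrical limb segment; the formula and example values are assumptions for illustration, not the paper's exact derivation:

```python
import math

def relaxed_length(circumference_cm, segment_length_cm, bandage_width_cm,
                   overlap_fraction, stretch):
    """Relaxed bandage length needed to cover a limb segment when the bandage
    is applied at a constant stretch (spiral technique with fixed overlap).
    Assumes a cylindrical limb of constant circumference."""
    advance_per_turn = bandage_width_cm * (1 - overlap_fraction)
    n_turns = math.ceil(segment_length_cm / advance_per_turn)
    applied_length = n_turns * circumference_cm   # length at full stretch
    return applied_length / stretch               # length before stretching

# 24 cm circumference, 30 cm segment, 10 cm bandage at 50% overlap,
# applied at 1.5x stretch (all example values):
print(f"{relaxed_length(24, 30, 10, 0.5, 1.5):.1f} cm of relaxed bandage")
```

Marking this relaxed length on the bandage before each layer gives the operator a visible target, which is how a fixed stretch (and hence a repeatable tension) can be reproduced layer after layer.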
Procedia PDF Downloads 499
24334 Using Probe Person Data for Travel Mode Detection
Authors: Muhammad Awais Shafique, Eiji Hato, Hideki Yaginuma
Abstract:
Recently, GPS data have been used in many studies to automatically reconstruct travel patterns for trip surveys, with the aim of minimizing the use of questionnaire surveys and travel diaries and thereby reducing their negative effects. In this paper, data acquired from the GPS receiver and accelerometer embedded in smartphones are used to predict the mode of transportation of the phone carrier. For prediction, a Support Vector Machine (SVM) and Adaptive Boosting (AdaBoost) are employed, and a unique method to improve the prediction results of these algorithms is also proposed. The results suggest that the prediction accuracy of AdaBoost after this improvement is better than that of the other approaches.
Keywords: accelerometer, AdaBoost, GPS, mode prediction, support vector machine
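A compact sketch of the classification step with scikit-learn; the features and synthetic data are assumed, and the paper's feature set and improvement method are not reproduced:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Synthetic trip segments: [mean speed (m/s), speed variance,
# mean |acceleration| (m/s^2)]; labels 0=walk, 1=bicycle, 2=car.
means = {0: [1.4, 0.2, 0.6], 1: [4.0, 1.0, 0.8], 2: [12.0, 9.0, 1.2]}
X = np.vstack([rng.normal(means[c], 0.5, size=(60, 3)) for c in (0, 1, 2)])
y = np.repeat([0, 1, 2], 60)

svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
ada = AdaBoostClassifier(n_estimators=100).fit(X, y)

segment = [[3.8, 1.1, 0.7]]  # an unseen trip segment
print("SVM:", svm.predict(segment), "AdaBoost:", ada.predict(segment))
```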
Procedia PDF Downloads 362
24333 Building Energy Modeling for Networks of Data Centers
Authors: Eric Kumar, Erica Cochran, Zhiang Zhang, Wei Liang, Ronak Mody
Abstract:
The objective of this article is to create a modelling framework that exposes the marginal costs of shifting workloads across geographically distributed data centers. Geographical distribution of Internet services helps optimize their performance for localized end users through lower communication times and increased availability. However, due to geographical and temporal effects, the physical embodiments of a service's data center infrastructure can vary greatly. In this work, we first identify that the variance in physical infrastructure stems primarily from local weather conditions, specific user traffic profiles, energy sources, and the types of IT hardware available at the time of deployment. Second, we create a traffic simulator that indicates the IT load at each data center in the set as an approximation of user traffic profiles. Third, we implement a framework that quantifies global energy demands using building energy models and the traffic profiles. The model produces a time series of energy demands that can be used for further life-cycle analysis of Internet services.
Keywords: data centers, energy, life cycle, network simulation
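A toy version of the framework's output (all facility parameters are assumed): per-site IT load times PUE gives facility demand, and per-site PUE differences expose the marginal cost of shifting load:

```python
import numpy as np

# Hourly IT load per data center from a traffic simulator (fractions of
# peak); the peak kW and PUE values below are illustrative assumptions.
hours = np.arange(24)
traffic = {"us-east": 0.5 + 0.4 * np.sin(2 * np.pi * (hours - 14) / 24),
           "eu-west": 0.5 + 0.4 * np.sin(2 * np.pi * (hours - 9) / 24)}
peak_it_kw = {"us-east": 2000.0, "eu-west": 1500.0}
pue = {"us-east": 1.5, "eu-west": 1.2}  # cooling overhead differs by climate

# Facility demand = IT load x PUE; summing gives the global demand series
demand = {dc: traffic[dc] * peak_it_kw[dc] * pue[dc] for dc in traffic}
total = sum(demand.values())
print("peak global demand: %.0f kW" % total.max())

# Marginal effect of moving 100 kW of IT load from us-east to eu-west:
# facility power changes by 100 * (PUE_dest - PUE_src) kW.
print("shift cost: %+.0f kW" % (100 * (pue["eu-west"] - pue["us-east"])))
```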
Procedia PDF Downloads 148