Search results for: elliptic curve digital signature algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7376

746 Improvement Performances of the Supersonic Nozzles at High Temperature Type Minimum Length Nozzle

Authors: W. Hamaidia, T. Zebbiche

Abstract:

This paper presents the design of axisymmetric supersonic nozzles that accelerate a flow to a desired exit Mach number while having a small weight and, at the same time, delivering high thrust. The nozzle gives a parallel and uniform flow at the exit section and is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line, while the subsonic portion serves to deliver a sonic flow at the throat. Such a nozzle is termed a minimum length nozzle. The study is carried out at high temperature, below the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim is to improve performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle's mass. The variation of the specific heats with temperature is taken into account. The design is made by the Method of Characteristics, and the finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the resulting nonlinear algebraic equations. The application is for air. All results depend on three parameters: the exit Mach number, the stagnation temperature, and the mesh chosen in the characteristics. A numerical simulation of the nozzle in Computational Fluid Dynamics-FASTRAN was performed to determine and confirm the necessary design parameters.
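
The core numerical ingredient named above, a finite-difference march with a predictor-corrector algorithm, can be illustrated generically. The sketch below is a minimal Heun-type predictor-corrector for dy/dx = f(x, y), the same scheme one would apply along each characteristic line; the right-hand side, step count, and initial data are placeholders, not the paper's compatibility equations.

```python
import numpy as np

def predictor_corrector(f, x0, y0, x_end, n_steps):
    """Heun-type predictor-corrector integration of dy/dx = f(x, y)."""
    xs = np.linspace(x0, x_end, n_steps + 1)
    h = xs[1] - xs[0]
    ys = np.empty(n_steps + 1)
    ys[0] = y0
    for i in range(n_steps):
        # Predictor: explicit Euler estimate of the next value
        y_pred = ys[i] + h * f(xs[i], ys[i])
        # Corrector: average the slopes at both ends of the step
        ys[i + 1] = ys[i] + 0.5 * h * (f(xs[i], ys[i]) + f(xs[i + 1], y_pred))
    return xs, ys

# Placeholder right-hand side standing in for a compatibility equation
# integrated along a characteristic.
xs, ys = predictor_corrector(lambda x, y: -y + x, 0.0, 1.0, 1.0, 100)
```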

Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, mass of the nozzle, specific heat at constant pressure, air, error

Procedia PDF Downloads 150
745 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of intrinsic location parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (inclination, aspect, elevation, and curvature), and land use/land cover. The triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas susceptible to landslides. No published study on landslides was available for this area. Thus, digital datasets of the above spatial parameters were acquired, stored, manipulated, and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). Landslide hazard zonation is deduced by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total 3200 km² of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazard, whilst about 13% (416 km²) falls under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazard, and the remaining area (7.3%; 233.6 km²) displays low probability of landslide hazard. This result confirms the importance of steep slope angles, lithology, vegetation land cover, and slope orientation (aspect) as the major determining factors of slope failure. The information provided by the produced landslide hazard zonation (LHZ) map could lay the basis for decision making, as well as for mitigation and for avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
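
The weighted grid-overlay step can be sketched in a few lines of Python with NumPy; the factor names, weights, synthetic grids, and class breaks below are illustrative stand-ins, not the study's calibrated values or its ArcGIS 10.2.2 workflow.

```python
import numpy as np

# Reclassified factor grids, each scored on a common susceptibility scale (1-5).
weights = {"slope": 0.30, "lithology": 0.25, "land_cover": 0.20,
           "aspect": 0.10, "rainfall": 0.10, "road_drainage_proximity": 0.05}
grids = {name: np.random.randint(1, 6, size=(500, 500)) for name in weights}

# Weighted linear overlay: hazard index = sum_i w_i * grid_i
lhz_index = sum(w * grids[name] for name, w in weights.items())

# Classify the continuous index into four hazard zones
breaks = np.quantile(lhz_index, [0.10, 0.85, 0.99])  # illustrative break points
zones = np.digitize(lhz_index, breaks)  # 0=low, 1=moderate, 2=high, 3=very high
```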

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 206
744 A QoS Aware Cluster Based Routing Algorithm for Wireless Mesh Network Using LZW Lossless Compression

Authors: J. S. Saini, P. P. K. Sandhu

Abstract:

The multi-hop nature of Wireless Mesh Networks and the rapid growth of throughput demands have led to multi-channel and multi-radio structures in mesh networks, but co-channel interference reduces the total throughput, especially in multi-hop networks. Quality of Service (QoS) refers to a broad collection of networking technologies and techniques that guarantee the ability of a network to deliver desired services with predictable results. QoS can be directed at a network interface, at a specific server's or router's performance, or at specific applications. Due to interference among transmissions, QoS routing in multi-hop wireless networks is a formidable task; in a multi-channel wireless network, two transmissions using the same channel may interfere with each other. This paper considers the Destination Sequenced Distance Vector (DSDV) routing protocol to locate a secure and optimised path. The proposed technique also utilizes Lempel-Ziv-Welch (LZW) lossless data compression and intra-cluster data aggregation to enhance the communication between the source and the destination. Clustering makes it possible to aggregate multiple packets and to locate a single route through the clusters, improving intra-cluster data aggregation. The LZW-based lossless compression reduces the data packet size, so transmission consumes less energy, increasing the network QoS. The MATLAB tool has been used to evaluate the effectiveness of the proposed technique. The comparative analysis has shown that the proposed technique outperforms the existing techniques.
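
The LZW stage itself is standard and easy to sketch; the minimal dictionary-based compressor below illustrates only the compression step, not its integration with DSDV routing and clustering in the proposed technique.

```python
def lzw_compress(data: bytes) -> list:
    """Classic LZW: emit dictionary codes for the longest known prefixes."""
    table = {bytes([i]): i for i in range(256)}  # single-byte seed dictionary
    next_code, w, out = 256, b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # keep extending the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = next_code       # learn the new sequence
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")  # toy input
```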

Keywords: WMNs, QoS, flooding, collision avoidance, LZW, congestion control

Procedia PDF Downloads 338
743 A Comprehensive Framework for Fraud Prevention and Customer Feedback Classification in E-Commerce

Authors: Samhita Mummadi, Sree Divya Nagalli, Harshini Vemuri, Saketh Charan Nakka, Sumesh K. J.

Abstract:

One of the most significant challenges of today's digital era is the alarming increase in fraudulent activities on online platforms. The appeal of avoiding long queues in shopping malls, the availability of a variety of products, and home delivery of goods have driven the rapid growth of vast online shopping platforms, and with it a major rise in fraudulent activity. For instance, consider a store that orders thousands of products all at once, where the massive number of items purchased is suspicious and the transactions later prove fraudulent, leading to a huge loss for the seller. Scenarios like these underscore the urgent need for machine learning approaches to combat fraud in online shopping. By leveraging robust algorithms, namely KNN, Decision Trees, and Random Forest, which are highly effective in generating accurate results, this research endeavors to discern patterns indicative of fraudulent behavior within transactional data. The primary motive and main focus is a comprehensive solution that empowers e-commerce administrators in timely fraud detection and prevention. In addition, sentiment analysis is harnessed in the model so that the e-commerce administrator can respond to customers' concerns, feedback, and comments, improving the user experience. The ultimate objective of this study is to harden online shopping platforms against fraud and ensure a safer shopping experience. The model reports an accuracy of 84%. The findings and observations noted during this work lay the groundwork for future advancements in more resilient and adaptive fraud detection systems, which will become crucial as technologies continue to evolve.
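
As a hedged sketch of the classification stage, the snippet below trains one of the named algorithms (Random Forest) on hypothetical transactional features; the file name, feature columns, and the class_weight choice for the imbalanced labels are illustrative assumptions, not the paper's configuration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical transactional dataset with a binary 'is_fraud' label.
df = pd.read_csv("transactions.csv")
X = df[["amount", "items_per_order", "account_age_days", "orders_last_24h"]]
y = df["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# class_weight='balanced' is one common way to handle the imbalanced
# classification noted in the keywords.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```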

Keywords: behavior analysis, feature selection, fraudulent pattern recognition, imbalanced classification, transactional anomalies

Procedia PDF Downloads 27
742 A Comparative Analysis of Innovation Maturity Models: Towards the Development of a Technology Management Maturity Model

Authors: Nikolett Deutsch, Éva Pintér, Péter Bagó, Miklós Hetényi

Abstract:

Strategic technology management has emerged and evolved in parallel with strategic management paradigms. It focuses on the opportunity for organizations operating mainly in technology-intensive industries to explore and exploit technological capabilities upon which competitive advantage can be obtained. As strategic technology management involves multiple functions within an organization, requires broad and diversified knowledge, and must be developed and implemented with business objectives to enable a firm's profitability and growth, excellence in strategic technology management provides unique opportunities for organizations in terms of building a successful future. Accordingly, a framework supporting the evaluation of the technological readiness level of management can significantly contribute to developing organizational competitiveness through a better understanding of strategic-level capabilities and deficiencies in operations. In the last decade, several innovation maturity assessment models have appeared and become designated management tools that can serve as references for practical approaches expected to be used by corporate leaders, strategists, and technology managers to understand and manage technological capabilities and capacities. The aim of this paper is to provide a comprehensive review of state-of-the-art innovation maturity frameworks, to investigate the critical lessons learned from their application, to identify the similarities and differences among the models, and to identify the main aspects and elements valid for the field and critical functions of technology management. To this end, a systematic literature review was carried out covering the 27 most widely known innovation maturity models, based on relevant papers and articles published in highly ranked international journals drawn from four relevant digital sources. Key findings suggest that, despite the diversity of the models, there is still room for improvement regarding the common understanding of innovation typologies, the full coverage of innovation capabilities, and the generalist approach to the validation and practical applicability of the structure and content of the models. Furthermore, the paper proposes an initial structure for the maturity assessment of the technological capacities and capabilities covered by strategic technology management, i.e., technology identification, technology selection, technology acquisition, technology exploitation, and technology protection.

Keywords: innovation capabilities, innovation maturity models, technology audit, technology management, technology management maturity models

Procedia PDF Downloads 61
741 IoT-Based Interactive Patient Identification and Safety Management System

Authors: Jonghoon Chun, Insung Kim, Jonghyun Lim, Gun Ro

Abstract:

We believe that it is possible to reduce patient safety accidents by displaying correct medical records and prescription information through interactive patient identification. Our system is based on smart bands worn by patients; these bands communicate with hybrid gateways that understand both BLE and WiFi communication protocols. Through the convergence of Bluetooth Low Energy (BLE), one of the short-range wireless communication technologies, and hybrid gateway technology, we implement an 'Intelligent Patient Identification and Location Tracking System' to prevent the medical errors that frequently occur in medical institutions. Based on big data and IoT technology using MongoDB, smart bands (with BLE and NFC functions), and hybrid gateways, we develop a system that enables two-way communication between medical staff and hospitalized patients, as well as minute-by-minute storage of the patients' location information. Based on the precise information provided by the big data system, such as the location tracking and movement of in-hospital patients wearing smart bands, our findings include the fact that a patient-specific location tracking algorithm allows the HIS (Hospital Information System) and other related systems to operate more efficiently. Through the system, patients can always be correctly identified using identification tags. In addition, the system automatically determines whether the patient is scheduled for a medical service in the system used at the medical institution, and presents the appropriateness of the treatment and the medical information (medical record and prescription information) on screen and by voice. This work was supported in part by the Korea Technology and Information Promotion Agency for SMEs (TIPA) grant funded by the Korean Small and Medium Business Administration (No. S2410390).
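
A minimal sketch of one gateway scan cycle is given below, assuming Python with the bleak BLE library and MongoDB via pymongo; the device roles, collection schema, and gateway identifier are illustrative assumptions, not the deployed system.

```python
import asyncio
import datetime
from bleak import BleakScanner   # BLE discovery on the gateway side
from pymongo import MongoClient  # MongoDB store for sightings

db = MongoClient("mongodb://localhost:27017")["hospital"]

async def gateway_scan(gateway_id: str):
    """One scan cycle of a hybrid gateway: discover nearby smart bands
    over BLE and store a time-stamped sighting in MongoDB."""
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        db.sightings.insert_one({
            "band_address": d.address,          # stands in for the patient tag
            "gateway": gateway_id,
            "seen_at": datetime.datetime.utcnow(),
        })

asyncio.run(gateway_scan("ward-3-gw-1"))  # hypothetical gateway name
```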

Keywords: BLE, hybrid gateway, patient identification, IoT, safety management, smart band

Procedia PDF Downloads 311
740 A Linearly Scalable Family of Swapped Networks

Authors: Richard Draper

Abstract:

A supercomputer can be constructed from identical building blocks which are small parallel processors connected by a network referred to as the local network. The routers have unused ports which are used to interconnect the building blocks. These connections are referred to as the global network. The address space has a global and a local component (g, l). The conventional way to connect the building blocks is to connect (g, l) to (g', l). If there are K blocks, this requires K global ports in each router. If a block is of size M, the result is a machine with KM routers having diameter two. To increase the size of the machine to 2K blocks, each router connects to only half of the other blocks. The result is a larger machine, but also one with greater diameter. This is a crude description of how the network of the CRAY XC® is designed. In this paper, a family of interconnection networks using routers with K global and M local ports is defined. Coordinates are (c, d, p) and the global connections are (c, d, p) ↔ (c', p, d), which swaps p and d. The network is denoted D3(K, M) and is called a Swapped Dragonfly. D3(K, M) has KM² routers and has diameter three, regardless of the size of K. To produce a network of size KM² conventionally, diameter would be an increasing function of K. The family of Swapped Dragonflies has other desirable properties: 1) D3(K, M) scales linearly in K and quadratically in M. 2) If L < K, D3(K, M) contains many copies of D3(L, M). 3) If L < M, D3(K, M) contains many copies of D3(K, L). 4) D3(K, M) can perform an all-to-all exchange in KM² + KM time, which is only slightly more than the time to do a one-to-all. This paper makes several contributions. It is the first time that a swap has been used to define a linearly scalable family of networks. Structural properties of this new family of networks are thoroughly examined. A synchronizing packet header is introduced; it specifies the path to be followed and makes it possible to define highly parallel communication algorithms on the network, among them an all-to-all exchange in time KM² + KM. To demonstrate the effectiveness of the swap, the properties of the network of the CRAY XC® and of D3(K, 16) are compared.
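
The defining swap is easy to express in code. The sketch below enumerates a router's links under the assumption (ours, for illustration) that the M local ports form a complete graph over the routers sharing (c, d); the global links follow the stated rule (c, d, p) ↔ (c', p, d).

```python
def neighbors(c, d, p, K, M):
    """Neighbors of router (c, d, p) in a Swapped Dragonfly D3(K, M).

    Local links (assumed here): a complete graph over the M routers that
    share block c and group d. Global links: the swap (c, d, p) <-> (c', p, d)
    for every other block c'. Note the swap is symmetric: (c', p, d) connects
    back to (c, d, p).
    """
    local = [(c, d, q) for q in range(M) if q != p]
    global_ = [(cp, p, d) for cp in range(K) if cp != c]
    return local + global_

# Example: all links of router (0, 1, 2) in D3(4, 3).
print(neighbors(0, 1, 2, K=4, M=3))
```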

Keywords: all-to-all exchange, CRAY XC®, Dragonfly, interconnection network, packet switching, swapped network, topology

Procedia PDF Downloads 121
739 Establishing a Computational Screening Framework to Identify Environmental Exposures Using Untargeted Gas-Chromatography High-Resolution Mass Spectrometry

Authors: Juni C. Kim, Anna R. Robuck, Douglas I. Walker

Abstract:

The human exposome, which includes chemical exposures over the lifetime and their effects, is now recognized as an important measure for understanding human health; however, the complexity of the data makes the identification of environmental chemicals challenging. The goal of our project was to establish a computational workflow for the improved identification of environmental pollutants containing chlorine or bromine. Using the "pattern.search" function available in the R package NonTarget, we wrote a multifunctional script that searches mass spectral clusters from untargeted gas-chromatography high-resolution mass spectrometry (GC-HRMS) for spectra consistent with chlorine- and bromine-containing organic compounds. The "pattern.search" function was incorporated into a new function that allows the evaluation of clusters containing multiple analyte fragments, has multi-core support, and provides a simplified output listing compounds containing chlorine and/or bromine. The new function was able to process 46,000 spectral clusters in under 8 seconds and identified over 150 potential halogenated spectra. We next applied our function to a deidentified dataset from patients diagnosed with primary biliary cholangitis (PBC), primary sclerosing cholangitis (PSC), and healthy controls. Twenty-two spectra corresponded to potential halogenated compounds in the PSC and PBC dataset, including six significantly different in PBC patients and four in PSC patients. We have developed an improved algorithm for detecting halogenated compounds in GC-HRMS data, providing a strategy for prioritizing exposures in the study of human disease.
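
The underlying idea, searching for the characteristic A/A+2 isotope spacing and intensity ratio of chlorine and bromine, can be sketched independently of the R implementation. The toy Python screen below uses approximate natural-abundance values; the tolerances and input format are illustrative.

```python
import numpy as np

# Approximate natural isotope signatures: chlorine's A+2 peak sits
# ~1.997 Da higher at ~1/3 the intensity (37Cl/35Cl ~ 0.32); bromine's
# sits ~1.998 Da higher at roughly equal intensity (81Br/79Br ~ 0.97).
PATTERNS = {"Cl": (1.99705, 0.32), "Br": (1.99796, 0.97)}

def flag_halogens(mz, intensity, mz_tol=0.01, ratio_tol=0.3):
    """Toy screen for Cl/Br isotope pairs in one centroided spectrum."""
    hits = []
    for i, m in enumerate(mz):
        for elem, (dm, ratio) in PATTERNS.items():
            j = int(np.argmin(np.abs(mz - (m + dm))))  # nearest candidate peak
            if abs(mz[j] - (m + dm)) < mz_tol:
                observed = intensity[j] / intensity[i]
                if abs(observed - ratio) / ratio < ratio_tol:
                    hits.append((m, elem))
    return hits
```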

Keywords: exposome, metabolome, computational metabolomics, high-resolution mass spectrometry, exposure, pollutants

Procedia PDF Downloads 138
738 YOLO-IR: Infrared Small Object Detection in High Noise Images

Authors: Yufeng Li, Yinan Ma, Jing Wu, Chengnian Long

Abstract:

Infrared object detection aims at separating small, dim targets from cluttered backgrounds, and its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications for improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to noise in the input infrared image, leading to a decrease in target detection accuracy and an increase in the false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve robustness to high infrared noise. To address the problem that high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model's ability to extract target features and its robustness to noise. Since the noise may overwhelm the local details of the target, resulting in the loss of small target features during depth down-sampling, we propose a deep and shallow feature fusion neck to improve detection accuracy. In addition, because generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance-based loss function to improve the training of the model. The experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in F1-score over existing state-of-the-art models.
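
The abstract does not give the exact form of its loss; as a hedged illustration, the sketch below implements one published Wasserstein-style formulation for boxes modeled as 2-D Gaussians (the Normalized Wasserstein Distance), for which the distance has a closed form. The scale constant C is dataset-dependent and chosen arbitrarily here.

```python
import torch

def nwd_loss(pred, target, C=10.0):
    """1 - NWD between boxes (cx, cy, w, h) modeled as 2-D Gaussians.

    The 2-Wasserstein distance between N((cx, cy), diag(w/2, h/2)^2)
    Gaussians reduces to the Euclidean distance between the vectors
    (cx, cy, w/2, h/2); NWD = exp(-W2 / C) normalizes it to (0, 1].
    """
    cx1, cy1, w1, h1 = pred.unbind(-1)
    cx2, cy2, w2, h2 = target.unbind(-1)
    w2_sq = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
             + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)
    nwd = torch.exp(-torch.sqrt(w2_sq.clamp(min=1e-12)) / C)
    return (1.0 - nwd).mean()
```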

Keywords: infrared small target detection, high noise, robustness, soft-threshold coordinate attention, feature fusion

Procedia PDF Downloads 73
737 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models

Authors: Ethan James

Abstract:

Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model that analyzes these retinal images with a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a residual neural network with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment and improved post-treatment outcomes.
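
As a stand-in for the described residual network (the exact architecture with cyclic pooling is not public here), the sketch below sets up a standard ResNet-18 classification head for an assumed four-class OCT label set; the class names and count are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g., CNV, DME, drusen, normal -- an assumed label set

# Residual network backbone with its final layer replaced for OCT classes.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
# Training then proceeds with a standard optimizer/loop over OCT scans.
```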

Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina

Procedia PDF Downloads 181
736 Pervasive Computing: Model to Increase Arable Crop Yield through an Intrusion Detection System (IDS)

Authors: Idowu Olugbenga Adewumi, Foluke Iyabo Oluwatoyinbo

Abstract:

Presently, there is much discussion of food security and of increasing the yield of arable crops throughout the world. This article briefly presents research efforts to create digital interfaces to nature, in particular in the area of crop production, with an interest in pervasive computing. The approach goes beyond the use of sensor networks for environmental monitoring by emphasizing the development of a system architecture that detects intruders whose intrusion reduces the farmer's yield over the planting/harvesting period. The objective of the work is to set out a model for a handheld or portable device for increasing the quality and quantity of arable crops. The system incorporates an infrared motion image sensor with a security alarm that can send a noise signal to an intruder on the farm. This model of a portable image-sensing device, by monitoring or scaring off humans, rodents, birds, and even pests, will reduce post-harvest loss and thereby increase farm yield. Nano intelligence technology is proposed to combat and minimize the intrusion processes that usually lead to low quality and quantity of farm produce. An intranet will be in place with a wireless radio (WLAN), router, server, and client computer system or handheld device, e.g., a PDA or mobile phone. This approach enables the development of hybrid systems which will be effective as a security measure on the farm. Precision agriculture has developed with the computerization of agricultural production systems and the networking of computerized control systems; in the intelligent plant production systems of controlled greenhouses, information on plant responses, measured by sensors, is used to optimize the system. Further work must be carried out on modeling in a pervasive computing environment to solve problems of agriculture, as the use of electronics in agriculture will attract more youth involvement in the industry.
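
A minimal sketch of the sensor-plus-alarm loop is given below, assuming a Raspberry Pi with a PIR motion sensor and a buzzer on illustrative GPIO pins; the paper's actual device and its nano intelligence components are not reproduced.

```python
import time
import RPi.GPIO as GPIO  # assumes a Raspberry Pi as the portable device

PIR_PIN, ALARM_PIN = 17, 27  # illustrative wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)     # infrared motion sensor input
GPIO.setup(ALARM_PIN, GPIO.OUT)  # buzzer / noise-signal output

try:
    while True:
        if GPIO.input(PIR_PIN):                # motion detected on the farm
            GPIO.output(ALARM_PIN, GPIO.HIGH)  # sound the deterrent alarm
            time.sleep(5)   # also a natural hook to notify the PDA/phone
            GPIO.output(ALARM_PIN, GPIO.LOW)
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```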

Keywords: pervasive computing, intrusion detection, precision agriculture, security, arable crop

Procedia PDF Downloads 403
735 A Rapid Prototyping Tool for Suspended Biofilm Growth Media

Authors: Erifyli Tsagkari, Stephanie Connelly, Zhaowei Liu, Andrew McBride, William Sloan

Abstract:

Biofilms play an essential role in treating water in biofiltration systems. The biofilm morphology and function are inextricably linked to the hydrodynamics of flow through a filter, and yet engineers rarely explicitly engineer this interaction. We develop a system that links computer simulation and 3-D printing to optimize and rapidly prototype filter media for biofilm function, with the hypothesis that biofilm function is intimately linked to the flow passing through the filter. A computational model that numerically solves the incompressible time-dependent Navier-Stokes equations, coupled to a model for biofilm growth and function, is developed. The model is embedded in an optimization algorithm that allows the model domain to adapt until criteria on biofilm functioning are met. This is applied to optimize the shape of filter media in a simple flow channel to promote biofilm formation. The computer code links directly to a 3-D printer, which allows us to prototype the design rapidly. Its validity is tested in flow visualization experiments and by microscopy. As proof of concept, the code was constrained to explore a small range of potential filter media, where the medium acts as an obstacle in the flow that sheds a von Karman vortex street, which was found to enhance the deposition of bacteria on surfaces downstream. The flow visualization and microscopy in the 3-D printed realization of the flow channel validated the predictions of the model and hence its potential as a design tool. Overall, it is shown that the combination of our computational model and 3-D printing can be effectively used as a design tool to prototype filter media that optimize biofilm formation.

Keywords: biofilm, biofilter, computational model, von Karman vortices, 3-D printing

Procedia PDF Downloads 142
734 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images

Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek

Abstract:

Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging because of the characteristics it shares with other ultrasound modalities, in addition to the three orthogonal planes (i.e., axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in applications such as the evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, together with an automatic approach to detect the ribs. In fact, rib detection is considered one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or across a set of images. Second, the normalized images are smoothed using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). A qualitative and quantitative evaluation over a total of 22 cases was performed. Across all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on rib borders are 15.1188 mm and 14.7184 mm, respectively.
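
The Hessian-eigenvalue step can be sketched with standard tools. Below is a toy "sheetness" response using scikit-image, under the assumption (consistent with rib geometry) that ribs appear as plate-like structures with one strongly negative eigenvalue and two near-zero ones; the sigma and response formula are illustrative choices, not the paper's.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def rib_sheetness(volume, sigma=2.0):
    """Toy plate-likeness map for a smoothed 3-D ABUS volume."""
    H = hessian_matrix(volume, sigma=sigma)
    eigs = hessian_matrix_eigvals(H)   # sorted: eigs[0] >= eigs[1] >= eigs[2]
    l2, l3 = eigs[1], eigs[2]
    # Sheet-like: |l3| large and negative, the other eigenvalues small.
    response = np.clip(np.abs(l3) - np.abs(l2), 0, None)
    return response * (l3 < 0)
```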

Keywords: automated 3D breast ultrasound, eigenvalues of Hessian matrix, nipple detection, rib detection

Procedia PDF Downloads 330
733 An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring

Authors: Flavio Cannavo

Abstract:

Continuous evaluation of the status of potentially hazardous volcanoes plays a key role for civil protection purposes. Monitoring volcanic activity, especially energetic paroxysms that usually come with tephra emissions, is crucial not only because of the exposure of the local population but also for airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of different volcanic behaviors. Moreover, the continuously measured parameters (e.g., seismic, deformation, infrasonic, and geochemical signals) are often not able to fully explain the ongoing phenomenon, making fast volcano state assessment a very puzzling task for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers to infer the ongoing volcano status automatically from all the available kinds of measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm, and each giving an output about the volcanic status. The ensemble technique weights the single classifier outputs to combine all the classifications into a single status that maximizes performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great power for decision-making purposes.
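
The weighted combination itself is simple to illustrate. Below is a minimal sketch, with classifier names, statuses, and weights standing in for the paper's trained ensemble; the weights would in practice come from each classifier's validation skill.

```python
from collections import defaultdict

def ensemble_status(predictions, weights):
    """Combine per-classifier volcano-status outputs by weighted vote.

    predictions: classifier name -> predicted status (illustrative labels)
    weights:     classifier name -> weight, e.g. a validation skill score
    """
    score = defaultdict(float)
    for name, status in predictions.items():
        score[status] += weights[name]
    return max(score, key=score.get)

status = ensemble_status(
    {"seismic": "paroxysm", "infrasonic": "paroxysm", "geochemical": "quiet"},
    {"seismic": 0.9, "infrasonic": 0.8, "geochemical": 0.6})
print(status)  # "paroxysm": the two higher-weighted classifiers agree
```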

Keywords: Bayesian networks, expert system, Mount Etna, volcano monitoring

Procedia PDF Downloads 246
732 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method

Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota

Abstract:

The Schlieren method, which has been conventionally used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of light refraction. The Background Oriented Schlieren (BOS) method proposed by Meier is one measurement method that solves these problems. Like the Schlieren method, BOS relies on the refraction of light, but it is characterized by using a digital camera to capture images of the background behind the observation area. The images are later analyzed by a computer to quantitatively detect the shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters necessary in the conventional Schlieren method, thus simplifying the setup. However, the BOS method suffers from defocusing of the observed object, since the camera is focused on the background image. The defocusing of the object grows with the distance between the background and the object; on the other hand, a greater distance yields higher sensitivity. Therefore, the distance between the background and the object must be set appropriately for the experiment, considering the trade-off between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, visualization experiments of an underexpanded jet were performed using a BOS measurement system that we constructed with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was varied as the parameter. The images were later analyzed on a personal computer to quantitatively detect the shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The measured shifts were reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred, and the noise decreases, as the distance between the background and the jet axis increases. Consequently, it is clarified that, at least in this experimental setup, the sensitivity constant should be greater than 20 and the circle of confusion diameter should be less than 2.7 mm.
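
The reconstruction chain, an inverse Abel transform followed by the Gladstone-Dale relation rho = (n - 1)/K, can be sketched as follows. This is a simplified onion-peeling version assuming the line-of-sight integral of (n - 1) has already been recovered from the measured background shifts; the sampling, input profile, and Gladstone-Dale constant for air are illustrative values.

```python
import numpy as np

GLADSTONE_DALE_AIR = 2.26e-4  # m^3/kg, approximate for visible light

def onion_peel_abel(projection, dr):
    """Onion-peeling inverse Abel transform for an axisymmetric field.

    projection[i] is the line-integrated quantity at offset y_i = i*dr
    from the jet axis; returns the radial profile f(r_i).
    """
    n = len(projection)
    r = np.arange(n + 1) * dr
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):  # chord length of ring j at offset y_i
            A[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                             - np.sqrt(r[j] ** 2 - r[i] ** 2))
    return np.linalg.solve(A, projection)  # upper-triangular system

# Placeholder projected profile of (n - 1); density then follows from
# the Gladstone-Dale relation rho = (n - 1) / K.
n_minus_1 = onion_peel_abel(np.linspace(1e-4, 0.0, 64), dr=1e-4)
rho = n_minus_1 / GLADSTONE_DALE_AIR
```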

Keywords: BOS method, underexpanded jet, Abel transformation, density field visualization

Procedia PDF Downloads 78
731 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study

Authors: Mohamed H. Khalil

Abstract:

Water is one of the most crucial elements influencing human lives and development. Over the last few years, GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced 'GIS-Web Based System', designed to assist and optimize the integration of data between the Call Center, Operation and Maintenance, and Laboratory departments. The core of the system is a unified 'Data Model' for all the spatial and tabular data of the corresponding departments. The system provides advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing, enhanced data retrieval, integrated workflow, different access levels, and correlated information records and tracking. This cost-effective system contributes significantly not only to the completeness of the base map (93%) and of the water network (87%) in a highly detailed GIS format and to the performance of customer service, but also to reducing operating costs in day-to-day operations (approximately 5-10%). In addition, the system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and Laboratory), allowing a better understanding and analysis of complex situations. Furthermore, the system has had a tangible effect on: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.), (ii) the effectiveness of the different water departments, (iii) efficient advanced analysis, (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annual), (v) tangible planning synthesizing spatial and tabular data, and finally, (vi) a scalable decision support system. The proposed second phase of this system will extend its scalability to include integration with the Billing and SCADA departments, adding advanced functionalities to the existing ones to allow further sustainable contributions.

Keywords: GIS Web-Based, base-map, water network, decision support system

Procedia PDF Downloads 96
730 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients

Authors: Ainura Tursunalieva, Irene Hudson

Abstract:

Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient; ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value; a higher score indicates a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient's probability of mortality. An important, overlooked limitation of SAPS II and MPM II is that, to date, they do not include interaction terms between a patient's vital signs. This is a prominent oversight, as there is likely an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is dimensionality, as variable selection becomes difficult. We propose an innovative scoring system which takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressure, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among both normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated to the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient's probability of mortality. The new copula-based approach will accommodate not only a patient's trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients' agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and provide visualizations of the copulas and of high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.
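
One standard way to estimate such a dependence structure is the normal-scores (Gaussian-copula) correlation, sketched below as an illustration; the vital-sign columns are assumed inputs, and the study's actual copula family may differ.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_corr(vitals):
    """Estimate a Gaussian-copula dependence matrix for vital signs.

    vitals: (n_patients, n_signs) array, e.g. columns for systolic BP,
    diastolic BP, heart rate, and SpO2. Ranking first makes the estimate
    invariant to the skewed marginals mentioned in the abstract.
    """
    n = vitals.shape[0]
    u = np.apply_along_axis(rankdata, 0, vitals) / (n + 1)  # pseudo-observations
    z = norm.ppf(u)                                         # normal scores
    return np.corrcoef(z, rowvar=False)

# Smoke test on synthetic data; real inputs would be ICU measurements.
rho = gaussian_copula_corr(np.random.randn(250, 4))
```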

Keywords: copula, intensive care unit scoring system, ROC curves, vital sign dependence

Procedia PDF Downloads 152
729 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

The National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional testing, environmental qualification testing, and so on. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold-start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated in the autonomous FORMOSAT-7 NSPO-built satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will pass comprehensive functional tests and environmental acceptance tests, which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), with currently only 50% of its throughput being used. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port some software algorithms, such as signal acquisition and correlation, which involve reused code and a large computational load, to an FPGA, while the processor remains responsible for operational control, the navigation solution, orbit propagation, and so on. Because FPGA development is evolving rapidly, the new system architecture upgraded via an FPGA should be able to achieve the goal of a multi-mode, multi-band, high-precision navigation receiver or scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. This paper explains the detailed DSP/FPGA architecture, development, test results, and the goals of the next development stage of this receiver.

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 369
728 Production of High Purity Cellulose Products from Sawdust Waste Material

Authors: Simiksha Balkissoon, Jerome Andrew, Bruce Sithole

Abstract:

Approximately half of the wood processed in the Forestry, Timber, Pulp and Paper (FTPP) sector accumulates as waste. The concept of a "green economy" encourages industries to employ revolutionary, transformative technologies to eliminate waste generation by exploring the development of new value chains. The transition towards an almost paperless world, driven by the rise of digital media, has resulted in a decline in traditional paper markets, prompting the FTPP sector to reposition itself and expand its product offerings by unlocking the potential of value-adding opportunities from renewable resources such as wood to generate revenue and mitigate its environmental impact. The production of valuable products from wood waste such as sawdust has been extensively explored in recent years. Wood components such as lignin, cellulose, and hemicelluloses, which can be extracted selectively by chemical processing, are suitable candidates for producing numerous high-value products. In this study, a novel approach was developed to produce high-value cellulose products, such as dissolving wood pulp (DWP), from sawdust. DWP is a high-purity cellulose product used in applications in the pharmaceutical, textile, food, and paint and coatings industries. The proposed approach demonstrates the potential to eliminate several complex processing stages, such as pulping and bleaching, which are associated with traditional commercial processes for producing high-purity cellulose products such as DWP, making it less chemical-, energy- and water-intensive. The developed process followed a path of experimentally designed laboratory tests evaluating typical processing conditions such as residence time, chemical concentrations, liquid-to-solid ratios, and temperature, followed by the application of suitable purification steps. Characterization of the product from the initial stage was conducted using commercially available DWP grades as reference materials. The chemical characteristics of the products have so far shown properties similar to commercial products, making the proposed process a promising and viable option for the production of DWP from sawdust.

Keywords: biomass, cellulose, chemical treatment, dissolving wood pulp

Procedia PDF Downloads 185
727 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network

Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka

Abstract:

Wireless Sensor Networks are used in many applications to collect sensed data from different sources. Sensed data have to be delivered towards the sink through the sensors' wireless interfaces using multi-hop communication. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in WSNs; reducing energy consumption while increasing the amount of generated data is a great challenge. In this paper, we implement two data gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which the mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing Structure for Mobile Sinks in WSNs (IAR), in which the mobile sinks use Prim's algorithm to collect data. We implemented the concepts common to both protocols, such as the deployment of mobile sinks, the generation of the visiting schedule, and the collection of data from the cluster members. We compared the performance of both protocols using statistics based on performance parameters including delay, packet drop, packet delivery ratio, available energy, and control overhead. We conclude that EEDG is more efficient than the IAR protocol, but with a few limitations, including unaddressed issues such as redundancy removal, idle listening, and the mobile sink's pause/wait state at the node. In future work, we plan to concentrate on these limitations to develop a new energy-efficient protocol which will help improve the lifetime of the WSN.
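
For reference, the routing structure attributed to IAR can be sketched with the textbook form of Prim's algorithm; the adjacency format and weights below are illustrative, not the simulated network's.

```python
import heapq

def prim_mst(adj, start):
    """Prim's algorithm over a weighted graph, of the kind the IAR
    protocol's mobile sinks are described as using for route structure.

    adj: {node: [(weight, neighbor), ...]} -- illustrative input format.
    Returns minimum-spanning-tree edges as (weight, parent, child).
    """
    visited = {start}
    frontier = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(frontier)
    tree = []
    while frontier and len(visited) < len(adj):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue               # edge closes a cycle; skip it
        visited.add(v)
        tree.append((w, u, v))
        for w2, x in adj[v]:       # grow the frontier from the new node
            if x not in visited:
                heapq.heappush(frontier, (w2, v, x))
    return tree

# Toy 4-node network with symmetric link costs.
adj = {"s": [(1, "a"), (4, "b")], "a": [(1, "s"), (2, "b"), (5, "c")],
       "b": [(4, "s"), (2, "a"), (1, "c")], "c": [(5, "a"), (1, "b")]}
print(prim_mst(adj, "s"))
```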

Keywords: aggregation, consumption, data gathering, efficiency

Procedia PDF Downloads 497
726 Classification on Statistical Distributions of a Complex N-Body System

Authors: David C. Ni

Abstract:

Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods in lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models are still mainly for specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane, i.e., the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and the interval [-t, t] shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which suggest the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.

Keywords: Blaschke, Lorentz transformation, complex variables, continuous, discrete, canonical, classification

Procedia PDF Downloads 309
725 Numerical Prediction of Width Crack of Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been used to study the prediction of cracking in concrete structures under loading, and finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends; it has been observed that cracks exceeding the allowable widths are unacceptable in an environment aggressive to reinforcing steel. For simulating the crack width, the discrete crack approach was adopted by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end specimens were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups from the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements SOLID65, capable of cracking in tension and crushing in compression, with a Drucker-Prager yield surface used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods, such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces; this technique is called the CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode I-dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was characterized according to the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner of the crack. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was observed, and the numerical models also allowed the load-crack width curves to be obtained. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width; favorable results were obtained for the load-displacement curve of the test and the location of the crack, and the results regarding the crack width can be considered good from a practical point of view.

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 167
724 One or More Building Information Modeling Managers in France: The Confusion of the Kind

Authors: S. Blanchard, D. Beladjine, K. Beddiar

Abstract:

Since 2015, the arrival of BIM in the French building sector has turned the professional world upside down. Not only have construction practices been impacted, but so have usages and the people involved, who have undergone important changes. The new collaborative mode generated by BIM and the digital model has challenged the supremacy of some construction actors, because the process involves working together while taking into account the needs of the other contributors. New BIM tools have emerged, and actors in the act of building must take ownership of them. It is in this context, under the impetus of a European directive and with the French government's encouragement, that new missions and job profiles have appeared. Moreover, concurrent engineering requires that each actor can advance at the same time as the others, according to the information that reaches them and the information they have to transmit. However, the French legal system around public procurement does not provide for this, and a substantial evolution must take place to adapt to the methodology. The new missions generated by BIM in France require a good mastery of the tools and the process. Also, to meet the objectives of the BIM approach, it is possible to define a typical job profile around BIM, adapted to the various sectors concerned. The multitude of job offers using the same terms for very different objectives, and the complexity of the proposed missions, motivated our approach. In order to reinforce exchanges with professionals and specialists, we carried out a statistical study to address this problem. Five topics are discussed around the business area: BIM in the company, the function (business), the software used, and the BIM missions practiced (39 items). About 1400 professionals were interviewed. These people work in construction companies (micro businesses, SMEs, and groups), engineering offices, or architectural agencies; 77% of respondents have the status of employees. All participants are graduates in their trade, the majority at level 1. Most have less than a year of experience in BIM, but some have 10 years. The results of our survey help to explain why it is not possible to define a single type of BIM Manager. Indeed, the specificities of companies are so numerous and complex, and the missions so varied, that there is no single model for the function. On the other hand, it was possible to define three main professions around BIM (Manager, Coordinator, and Modeler) and three main missions for the BIM Manager (deployment of the method, assistance to project management, and management of a project).

Keywords: BIM manager, BIM modeler, BIM coordinator, project management

Procedia PDF Downloads 163
723 Evaluation of Surface Water and Groundwater Quality in Parts of Umunneochi Southeast, Nigeria

Authors: Joshua Chima Chizoba, Wisdom Izuchukwu Uzoma, Elizabeth Ifeyiwa Okoyeh

Abstract:

Water cannot be optimally used and sustained unless its quality is periodically assessed. The study area, Umunneochi and environs, is located in the southeastern part of Nigeria, stretching from latitude 5°50′N to 6°00′N and longitude 7°20′E to 7°30′E. The major geologic formations in the area include the Asu River Group, Nkporo Shale, and Ajali Sandstone. The aim of this study is to evaluate the hydrochemical characteristics of surface and groundwater sources in parts of Umunneochi and environs, in order to establish the potability of the water sources for drinking and domestic purposes and their suitability for irrigation. A total of 15 samples were collected randomly from streams, springs, and wells. The samples were analyzed for physicochemical parameters and heavy metals using handheld digital kits, a photometer, titration, and Atomic Absorption Spectrophotometry (AAS), following acceptable standards. The analytical data obtained were interpreted, and the results compared with the World Health Organization (WHO) standard. The pH, SO₄²⁻, and Cl⁻ values range from 5.81 to 6.07, 41.93 mg/l to 142.95 mg/l, and 20.00 mg/l to 111 mg/l, respectively, while Pb and Zn show relatively low mean concentrations of 0.14 mg/l and 0.40 mg/l; all are within WHO permissible limits except pH. About 27% of the samples are moderately hard, which is attributed to the mining activities in the area. The abundance of cations and anions follows the order K⁺ > Na⁺ > Mg²⁺ > Ca²⁺ and SO₄²⁻ > Cl⁻ > HCO₃⁻ > NO₃⁻, respectively. Chloride, bicarbonate, and nitrate are all within the permissible limits, while 13.33% of the samples contain sulphate above the permissible limit. The calculated Water Quality Index (WQI) values are less than 50, indicating excellent water. The predominant water types in the study area are the Na-Cl water type and the mixed Ca-Mg-Cl water type, based on the sample plots on the Piper diagram. The Sodium Adsorption Ratio (SAR) calculations showed excellent water for consumption and good water for irrigation, with low sodium and alkalinity hazards, respectively. Government water projects are recommended in the area for sustainable domestic and agricultural water supply to ease water supply problems.
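
The SAR computation referenced above follows the standard formula SAR = Na⁺ / √((Ca²⁺ + Mg²⁺)/2) with concentrations in meq/L; a small sketch with illustrative values (not the paper's data):

```python
import math

def sodium_adsorption_ratio(na, ca, mg):
    """SAR = Na+ / sqrt((Ca2+ + Mg2+) / 2), all concentrations in meq/L.

    Values in mg/L must first be divided by the equivalent weight
    (Na ~23.0, Ca ~20.0, Mg ~12.15) to obtain meq/L.
    """
    return na / math.sqrt((ca + mg) / 2.0)

sar = sodium_adsorption_ratio(na=1.2, ca=0.8, mg=0.5)  # illustrative sample
print(f"SAR = {sar:.2f}")  # SAR < 10 is generally rated low sodium hazard
```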

Keywords: groundwater, hydrochemical, physicochemical, water-type, sodium adsorption ratio

Procedia PDF Downloads 130
722 Anatomical and Histological Analysis of Salpinx and Ovary in Anatolian Wild Goat (Capra aegagrus aegagrus)

Authors: Gulseren Kirbas, Mushap Kuru, Buket Bakir, Ebru Karadag Sari

Abstract:

Capra (mountain goat) is a genus comprising nine species. The domestic goat (C. aegagrus hircus) is a domesticated subspecies of the wild goat. This study aimed to determine the anatomical structure of the salpinx and ovary of the Anatolian wild goat (C. aegagrus aegagrus). The study used animals that had been brought to the Kafkas University Wildlife Rescue and Rehabilitation Center, Kars, Turkey, for various reasons such as traffic accidents and firearm injuries. The salpinges and ovaries of four wild goats of similar ages, which could not be saved despite all interventions, were dissected. Measurements were taken from the right and left salpinx and ovary using digital calipers. The weight of each ovary and salpinx was measured using a precision scale (min: 0.0001 g, max: 220 g; code: XB220A; Precisa, Switzerland). After the organs were weighed, the histological structure of the tissues was examined. The tissue samples were fixed in 10% formaldehyde for 24 h, processed routinely, and embedded in paraffin. Mallory’s modified triple staining was used to demonstrate the general structure of the salpinx. The salpinx was found to consist of three regions (infundibulum, ampulla, and isthmus), each composed of tunica mucosa, tunica muscularis, and tunica serosa. Prismatic epithelial cells were observed in the lamina epithelialis of the tunica mucosa in every region, but prismatic fimbriae cells occurred mostly in the infundibulum. The ampulla was distinguished by its many mucosal folds; it was the longest region of the salpinx and joined the isthmus via the ampullary-isthmic junction. The isthmus, the caudal end of the salpinx joined to the uterus, had the thickest tunica muscularis of the three regions. The mean length of the ovary was 13.22 ± 1.27 mm, its width 8.46 ± 0.88 mm, its thickness 5.67 ± 0.79 mm, and its weight 0.59 ± 0.17 g. The mean length of the salpinx was 58.11 ± 14.02 mm, its width 0.80 ± 0.22 mm, its thickness 0.41 ± 0.01 mm, and its weight 0.30 ± 0.08 g. In conclusion, the Anatolian wild goat, part of Turkey's wildlife diversity, has been disappearing in recent years due to illegal and uncontrolled hunting as well as traffic accidents. These findings are believed to contribute to the literature.

Keywords: Anatolian wild goat, anatomy, ovary, salpinx

Procedia PDF Downloads 224
721 Integer Programming: Domain Transformation in the Nurse Scheduling Problem

Authors: Geetha Baskaran, Andrzej Bargiela, Rong Qu

Abstract:

Motivation: Nurse scheduling is a complex combinatorial optimization problem, known to be NP-hard. Efficient re-scheduling requires relaxing selected constraints into soft constraints and minimizing a trade-off among the measures of their violation. Problem Statement: In this paper, we extend our novel approach to solving the nurse scheduling problem by transforming it through information granulation. Approach: The approach satisfies the rules of a typical hospital environment based on a standard benchmark problem. Generating good work schedules has a great influence on nurses' working conditions, which in turn are strongly related to the quality of health care. Domain transformation, combining the strengths of operations research and artificial intelligence, was proposed for the solution of the problem. Compared to conventional methods, our approach involves judicious grouping (information granulation) of shift types, which transforms the original problem into a smaller solution domain. The schedules from the smaller problem domain are later converted back into the original problem domain, taking into account the constraints that could not be represented in the smaller domain. An Integer Programming (IP) package is used to solve the transformed scheduling problem by employing the branch-and-bound algorithm; we used GNU Octave for Windows. Results: The scheduling problem has been solved in the proposed formalism, resulting in a high-quality schedule. Conclusion: Domain transformation represents a departure from the conventional one-shift-at-a-time scheduling approach. It offers the advantage of efficient and easily understandable solutions as well as deterministic reproducibility of the results. We note, however, that it does not guarantee the global optimum.
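To make the soft-constraint formulation concrete, here is a minimal IP sketch in Python using the PuLP library; the authors used GNU Octave, and the toy model below (nurses, demand figures, single soft constraint) is an illustrative stand-in, not their granulated formulation.

```python
# A minimal sketch of a nurse scheduling IP with one soft constraint:
# coverage shortfalls are allowed but penalized via slack variables.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

nurses, days, shifts = range(4), range(7), ("early", "late")
demand = {s: 1 for s in shifts}  # nurses required per shift per day

prob = LpProblem("nurse_scheduling", LpMinimize)
x = {(n, d, s): LpVariable(f"x_{n}_{d}_{s}", cat=LpBinary)
     for n in nurses for d in days for s in shifts}
# Slack variables measure violation of the soft coverage constraint.
slack = {(d, s): LpVariable(f"slack_{d}_{s}", lowBound=0)
         for d in days for s in shifts}

# Hard constraint: at most one shift per nurse per day.
for n in nurses:
    for d in days:
        prob += lpSum(x[n, d, s] for s in shifts) <= 1

# Soft constraint: cover demand, with a penalized shortfall.
for d in days:
    for s in shifts:
        prob += lpSum(x[n, d, s] for n in nurses) + slack[d, s] >= demand[s]

# Objective: minimize the total measure of violation.
prob += lpSum(slack.values())
prob.solve()  # branch-and-bound via the bundled CBC solver
print("total violation:", sum(v.value() for v in slack.values()))
```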

Keywords: domain transformation, nurse scheduling, information granulation, artificial intelligence, simulation

Procedia PDF Downloads 397
720 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study

Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin

Abstract:

Live video streaming is one of the most widely used services among end users, yet it poses a big challenge to network operators in terms of quality. The only way to provide an excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms: since human perception is considered the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we conducted subjective evaluation tests on a set of video sequences containing the temporal impairment known as frame freezing. Frame freezing can arise from transmission errors as well as hardware faults, resulting in the loss of video frames at the receiving side of a transmission system. In our subjective tests, we evaluated videos that contain a single freezing event as well as videos that contain multiple freezing events. We recorded the subjective test results for all videos in order to compare the available No Reference (NR) objective algorithms. Finally, we report the performance of the NR algorithms used for objective evaluation of the videos and identify the algorithm that performs better. The outcome of this study shows the importance of QoE and its effect on human perception, and the subjective evaluation results can serve to validate objective algorithms.
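A simple no-reference detector for frame freezing can be built by flagging runs of near-identical consecutive frames. The sketch below illustrates this idea under an assumed difference threshold; it is not one of the specific NR algorithms evaluated in the paper.

```python
# A minimal no-reference sketch of frame-freeze detection: flag runs of
# consecutive frames whose mean absolute difference falls below a
# threshold (threshold and frame source are illustrative assumptions).
import numpy as np

def detect_freezes(frames, threshold=0.5):
    """frames: sequence of grayscale frames as 2-D numpy arrays.
    Returns a list of (start_index, length) for each freeze event."""
    events, run_start, prev = [], None, None
    for i, frame in enumerate(frames):
        if prev is not None:
            diff = np.mean(np.abs(frame.astype(float) - prev.astype(float)))
            if diff < threshold and run_start is None:
                run_start = i - 1          # freeze starts at previous frame
            elif diff >= threshold and run_start is not None:
                events.append((run_start, i - run_start))
                run_start = None
        prev = frame
    if run_start is not None:              # freeze lasted to the end
        events.append((run_start, len(frames) - run_start))
    return events

# Synthetic stream: 5 moving frames, 3 frozen copies, 2 moving frames
rng = np.random.default_rng(0)
moving = [rng.integers(0, 256, (64, 64)) for _ in range(5)]
stream = moving + [moving[-1].copy()] * 3 \
         + [rng.integers(0, 256, (64, 64)) for _ in range(2)]
print(detect_freezes(stream))  # -> [(4, 4)]: frames 4-7 are frozen
```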

Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)

Procedia PDF Downloads 601
719 Context-Aware Point-of-Interest Recommender Systems Using Integrated Sentiment and Network Analysis

Authors: Ho Yeon Park, Kyoung-Jae Kim

Abstract:

Recently, users’ interest in location-based social network services has been increasing with advances in the social web and location-based technologies. Recommending preferred items becomes easier if a user’s preferences, context, and social network information can be used simultaneously. In this study, we propose a context-aware POI (point-of-interest) recommender system using location-based network analysis and sentiment analysis, which considers context, social network information, and implicit user preference scores. The proposed system consists of three sub-modules and an integrated recommendation system combining them. First, we develop a recommendation module based on network analysis, which combines social network analysis with cluster-indexing collaborative filtering. Next, we develop a recommendation module using social singular value decomposition (SVD) and implicit SVD: it estimates preference scores based on the frequency of a user’s POI visits, using social and implicit SVD to reflect implicit feedback in the collaborative filtering process. Finally, we propose a recommendation module using opinion mining and sentiment analysis on data such as POI reviews extracted from location-based social networks. An integration algorithm then combines the results of the three recommendation modules. Experimental results show the usefulness of the proposed model in terms of recommendation performance.
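To illustrate the implicit-feedback SVD ingredient, the sketch below factorizes a small user-by-POI visit-frequency matrix and ranks unvisited POIs by reconstructed affinity; the data and latent rank are illustrative assumptions, not the authors' social/implicit SVD formulation.

```python
# A minimal sketch of implicit-feedback SVD: factorize a user x POI
# visit-count matrix and score unvisited POIs from the reconstruction.
import numpy as np
from scipy.sparse.linalg import svds

# Rows: users, columns: POIs; entries: visit counts (implicit feedback)
visits = np.array([[5., 0., 2., 0.],
                   [0., 3., 0., 1.],
                   [4., 0., 0., 0.],
                   [0., 2., 1., 0.]])

k = 2  # number of latent dimensions (assumed)
U, s, Vt = svds(visits, k=k)
scores = U @ np.diag(s) @ Vt  # predicted affinity for every user/POI pair

user = 2
unvisited = np.where(visits[user] == 0)[0]
ranked = unvisited[np.argsort(-scores[user, unvisited])]
print("recommended POIs for user 2:", ranked)
```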

Keywords: sentiment analysis, network analysis, recommender systems, point-of-interests, business analytics

Procedia PDF Downloads 250
718 Accessible Mobile Augmented Reality App for Art Social Learning Based on Technology Acceptance Model

Authors: Covadonga Rodrigo, Felipe Alvarez Arrieta, Ana Garcia Serrano

Abstract:

Mobile augmented reality technologies have become very popular in the educational field in recent years. Researchers have studied how these technologies improve student engagement and understanding of the learning process, but few studies have addressed the accessibility of these new technologies as applied to the digital humanities. The goal of our research is to develop an accessible mobile application with embedded augmented-reality characters drawn from the artwork, gamification events, and multi-sensorial activities. The mobile app conducts a learning itinerary around the artistic work, driving the user experience both inside and outside the museum. The learning design follows an inquiry-based methodology, with social learning conducted through interaction with social networks. The software application follows a user-centered design process, applying the Universal Design for Learning (UDL) principles to ensure the best level of accessibility for all. The mobile augmented reality application starts by recognizing a marker on a museum masterpiece using the camera of the mobile device; the augmented reality information (history, author, 3D images, audio, quizzes) is then shown through virtual characters that come out of the artwork. To comply with the UDL principles, we use a version of the technology acceptance model (TAM) to study ease of use and perceived usefulness, extended by the authors with specific indicators for measuring accessibility issues. Following a rapid-prototyping method, the first app has recently been produced, fulfilling the EN 301 549 standard and the W3C accessibility guidelines for mobile development. A TAM-based web questionnaire with 214 participants with different kinds of disabilities was previously conducted to gather information and feedback on user preferences regarding artworks in the Museo del Prado, the level of acceptance of technology innovations, and the ease of use of mobile elements. Preliminary results show that people with disabilities felt very comfortable using mobile apps and internet connections, and the augmented reality elements seem to offer added value that is highly engaging and motivating for students.
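By way of illustration, a TAM-style questionnaire is typically scored by averaging Likert items per construct. The sketch below uses hypothetical items and responses, including an assumed accessibility construct standing in for the authors' extension.

```python
# A minimal sketch of scoring a TAM-style questionnaire by averaging
# Likert items per construct (items and data are hypothetical, not the
# authors' extended instrument).
import pandas as pd

# 1-5 Likert responses; PU = perceived usefulness, PEOU = perceived
# ease of use, ACC = an assumed accessibility extension.
answers = pd.DataFrame({
    "PU1": [4, 5, 3], "PU2": [4, 4, 3],
    "PEOU1": [5, 4, 4], "PEOU2": [4, 4, 5],
    "ACC1": [5, 5, 4],
})

constructs = {"PU": ["PU1", "PU2"], "PEOU": ["PEOU1", "PEOU2"], "ACC": ["ACC1"]}
for name, items in constructs.items():
    # Mean over items per respondent, then mean over respondents
    print(name, answers[items].mean(axis=1).mean().round(2))
```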

Keywords: H.5.1 (multimedia information systems), artificial, augmented and virtual realities, evaluation/methodology

Procedia PDF Downloads 135
717 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, coupling together the 2-D Navier-Stokes equations, constitutive equations, and 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. We introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of the new scheme, we compared the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure against those obtained by coupling with the lumped parameter model. Comprehensive studies of the sensitivity of the numerical scheme to the initial conditions, the elasticity, and the number of spectral modes have been carried out. Improvement of the computational algorithm, with stable convergence, has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.
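As a minimal illustration of the 0-D side of such a coupling, the sketch below integrates a two-element Windkessel model, C dP/dt = Q(t) - P/R, whose interface pressure could serve as an outflow boundary condition for the 2-D solver at each coupling step. The parameter values and the inflow waveform are illustrative assumptions, not the paper's model.

```python
# A minimal sketch of a 0-D lumped parameter (two-element Windkessel)
# model of the kind coupled to a 2-D flow solver: C dP/dt = Q(t) - P/R.
import numpy as np
from scipy.integrate import solve_ivp

R, C = 1.0, 1.5          # peripheral resistance, arterial compliance
T = 0.8                  # cardiac period (s)

def inflow(t):
    """Half-sine systolic inflow, standing in for the flux Q(t)
    computed by the upstream (2-D) domain."""
    phase = t % T
    return np.sin(np.pi * phase / (0.3 * T)) if phase < 0.3 * T else 0.0

def windkessel(t, p):
    return [(inflow(t) - p[0] / R) / C]

sol = solve_ivp(windkessel, (0.0, 5 * T), [1.0], max_step=1e-3)
# The pressure sol.y[0] at the interface would be fed back to the 2-D
# solver as an outflow boundary condition at each coupling step.
print(f"pressure range after transients: "
      f"{sol.y[0][-800:].min():.2f} to {sol.y[0][-800:].max():.2f}")
```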

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes/0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 361