Search results for: gas distribution network
7997 Hg Anomalies and Soil Temperature Distribution to Delineate Upflow and Outflow Zone in Bittuang Geothermal Prospect Area, South Sulawesi, Indonesia
Authors: Adhitya Mangala, Yobel
Abstract:
The Bittuang geothermal prospect area is located in Tana Toraja district, South Sulawesi. The geothermal system of the area is related to the eruption products of Karua Volcano. The area has surface manifestations such as fumaroles, hot springs, silica sinter, and mineral alteration, which prove that hydrothermal activity is taking place in the subsurface. However, the project and development of the area have not been implemented yet. One of the important elements in geothermal exploration is to determine the upflow and outflow zones. This information is very useful for identifying targets for geothermal wells and development, which is a risky task. The methods used in this research were mercury (Hg) anomalies in soil, soil and manifestation temperature distribution, and fault fracture density over a 93 km² research area. Hg anomalies were measured to determine the distribution of hydrothermal alteration. Soil and manifestation temperature surveys were conducted to estimate the heat distribution. Fault fracture density (FFD) is useful for determining fracture intensity and trend from surface observation. These yield an Hg anomaly map and a soil and manifestation temperature map, which were overlaid on the fault fracture density map and the geological map. A conceptual model was then built from north–south and east–west cross sections to delineate the upflow and outflow zones in this area. The results show that the upflow zone is located in the northern–northeastern part of the research area, where elevation increases and Hg anomalies and soil temperature decrease. The outflow zone is located in the southern–southeastern part of the research area and is characterized by chloride and chloride–bicarbonate geothermal fluid types, higher soil temperature, and higher Hg anomalies. Soil temperature ranges from 16–19 °C in the upflow zone and 19–26.5 °C in the outflow zone. Hg ranges from 0–200 ppb in the upflow zone and 200–520 ppb in the outflow zone. Structural control of the area shows a northwest–southeast trend. The boundary between the upflow and outflow zones lies at 1550–1650 m elevation. This research delivers a conceptual model, built with these combined methods, that is useful for identifying targets for geothermal wells, projects, and development in the Bittuang geothermal prospect area.
Keywords: Bittuang geothermal prospect area, Hg anomalies, soil temperature, upflow and outflow zone
Procedia PDF Downloads 333
7996 O-LEACH: The Problem of Orphan Nodes in the LEACH Routing Protocol for Wireless Sensor Networks
Authors: Wassim Jerbi, Abderrahmen Guermazi, Hafedh Trabelsi
Abstract:
The optimum use of coverage in wireless sensor networks (WSNs) is very important. LEACH (Low Energy Adaptive Clustering Hierarchy) is a hierarchical clustering protocol for wireless sensor networks that allows the formation of distributed clusters. In each cluster, LEACH randomly selects some sensor nodes, called cluster heads (CHs), using a probabilistic calculation. Each non-CH node is supposed to join a cluster and become a cluster member. Nevertheless, the CHs can be concentrated in a specific part of the network, so that several sensor nodes cannot reach any CH. To solve this problem, we propose O-LEACH, an orphan-node protocol whose role is to reduce the number of sensor nodes that do not belong to any cluster. A cluster member called a gateway receives messages from neighboring orphan nodes and informs its CH of the neighboring nodes that do not belong to any group. The gateway (denoted CH') then attaches the orphan nodes to the cluster and collects their data. O-LEACH enables a new method of cluster formation that leads to a longer network lifetime and minimal energy consumption. Orphan nodes possess enough energy and seek to be covered by the network. The principal novel contribution of the proposed work is the O-LEACH protocol, which provides coverage of the whole network with a minimum number of orphan nodes and a very high connectivity rate. As a result, the WSN application receives data from the entire network, including orphan nodes. The proper functioning of the application therefore requires intelligent management of the resources present within each sensor node. The simulation results show that O-LEACH performs better than LEACH in terms of coverage, connectivity rate, energy, and scalability.
Keywords: WSNs, routing, LEACH, O-LEACH, orphan nodes, sub-cluster, gateway, CH'
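For readers unfamiliar with the underlying election rule, the sketch below shows the standard LEACH probabilistic cluster-head threshold together with a simplified orphan-attachment step in the spirit of O-LEACH; the gateway logic here is an assumption for illustration, not the authors' exact protocol.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def elect_cluster_heads(nodes, p, rnd, recently_ch):
    """Standard LEACH rule: an eligible node becomes a cluster head with
    threshold T(n) = p / (1 - p * (rnd mod 1/p))."""
    epoch = max(1, round(1 / p))
    t = p / (1 - p * (rnd % epoch))
    return [n for n in nodes if n not in recently_ch and random.random() < t]

def attach_orphans(nodes, heads, positions, radio_range):
    """Simplified O-LEACH-style step (assumption): a node that cannot reach any
    CH directly is attached through its nearest in-range cluster member, which
    then acts as a gateway (CH') toward the real CH."""
    members, orphans, gateways = {}, [], {}
    for n in nodes:
        if n in heads:
            continue
        reachable = [h for h in heads if dist(positions[n], positions[h]) <= radio_range]
        if reachable:
            members[n] = min(reachable, key=lambda h: dist(positions[n], positions[h]))
        else:
            orphans.append(n)
    for o in orphans:
        in_range = [m for m in members if dist(positions[o], positions[m]) <= radio_range]
        if in_range:
            gw = min(in_range, key=lambda m: dist(positions[o], positions[m]))
            gateways[o] = (gw, members[gw])   # orphan -> (gateway, gateway's CH)
    return members, orphans, gateways
```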
Procedia PDF Downloads 373
7995 Hand Symbol Recognition Using Canny Edge Algorithm and Convolutional Neural Network
Authors: Harshit Mittal, Neeraj Garg
Abstract:
Hand symbol recognition is a pivotal component in the domain of computer vision, with far-reaching applications spanning sign language interpretation, human-computer interaction, and accessibility. This research paper discusses an approach that integrates the Canny edge algorithm and a convolutional neural network. The significance of this study lies in its potential to enhance communication and accessibility for individuals with hearing impairments or those engaged in gesture-based interactions with technology. In the experiment, the data were manually collected by the authors from a webcam using Python scripts; to enlarge the dataset, augmentation was applied to the original images, which makes the model more robust. The dataset of about 6000 colour images, distributed equally among 5 classes (i.e., 1, 2, 3, 4, 5), is first converted to grayscale and then processed by the Canny edge algorithm with both thresholds set to 150. After the dataset was built, it was used to train a convolutional neural network model, giving an accuracy of 0.97834, precision of 0.97841, recall of 0.9783, and F1 score of 0.97832. For end users, a Python application opens a window for hand symbol recognition. This research, at its core, seeks to advance the field of computer vision by providing an advanced perspective on hand sign recognition. By leveraging the capabilities of the Canny edge algorithm and convolutional neural networks, this study contributes to the ongoing efforts to create more accurate, efficient, and accessible solutions for individuals with diverse communication needs.
Keywords: hand symbol recognition, computer vision, Canny edge algorithm, convolutional neural network
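A minimal Python sketch of the described pipeline, using OpenCV's Canny detector with both thresholds at 150 as stated in the abstract; the input size and CNN layer sizes are assumptions, not the authors' exact architecture.

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

def preprocess(img_bgr):
    """Grayscale conversion followed by Canny edge detection with both
    thresholds set to 150, as described in the abstract."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 150, 150)
    edges = cv2.resize(edges, (64, 64)) / 255.0      # 64x64 input size is an assumption
    return edges[..., np.newaxis]

def build_model(num_classes=5):
    """Small CNN classifier; the layer counts are assumptions."""
    m = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m
```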
Procedia PDF Downloads 70
7994 An Automated Bender Element System Used for S-Wave Velocity Tomography during Model Pile Installation
Authors: Yuxin Wu, Yu-Shing Wang, Zitao Zhang
Abstract:
A high-speed, time-lapse S-wave velocity measurement system has been built for S-wave tomography in sand. The system is based on bender elements and is applied to model pile tests in a tailor-made pressurized chamber to monitor the shear wave velocity distribution during pile installation in sand. Tactile pressure sensors are used in parallel with the bender elements to monitor stress changes during the tests, and strain gauges are used to monitor the shaft resistance and toe resistance of the pile. Since the shear wave velocity (Vs) is determined by the shear modulus of the sand, and the shaft resistance of the pile is also influenced by the shear modulus of the sand around the pile, the purposes of this study are to monitor, in time lapse, the change in the S-wave velocity distribution at a given horizontal section during pile installation and to correlate the S-wave velocity distribution with the shaft resistance of the pile in sand.
Keywords: bender element, pile, shaft resistance, shear wave velocity, tomography
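The link between the measured travel time and the soil stiffness follows the usual bender-element relations, sketched below; the numbers are illustrative assumptions, not test data.

```python
def shear_wave_velocity(tip_to_tip_m, travel_time_s):
    """Vs from a bender element pair: tip-to-tip distance / first-arrival time."""
    return tip_to_tip_m / travel_time_s

def small_strain_shear_modulus(density_kg_m3, vs_m_s):
    """G0 = rho * Vs^2, the small-strain shear modulus of the sand."""
    return density_kg_m3 * vs_m_s ** 2

# Illustrative numbers (assumptions, not measured data):
vs = shear_wave_velocity(0.15, 0.00075)       # 0.15 m path, 0.75 ms arrival -> 200 m/s
g0 = small_strain_shear_modulus(1600.0, vs)   # -> 64 MPa
```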
Procedia PDF Downloads 432
7993 On Privacy-Preserving Search in the Encrypted Domain
Authors: Chun-Shien Lu
Abstract:
Privacy-preserving query has recently received considerable attention in the signal processing and multimedia community. It is also a critical step in wireless sensor networks for the retrieval of sensitive data. The purposes of privacy-preserving query in the areas of signal processing and sensor networks are the same, but the similarities and differences of the adopted technologies have not been fully explored. In this paper, we first review the recently developed methods of privacy-preserving query and then describe, in a comprehensive manner, what the two areas can learn from each other.
Keywords: encryption, privacy-preserving, search, security
Procedia PDF Downloads 260
7992 On the Performance Analysis of Coexistence between IEEE 802.11g and IEEE 802.15.4 Networks
Authors: Chompunut Jantarasorn, Chutima Prommak
Abstract:
This paper presents an intensive measurement study of network performance when IEEE 802.11g Wireless Local Area Networks (WLANs) coexist with IEEE 802.15.4 Wireless Personal Area Networks (WPANs). The measurement results show that the coexistence of the two networks can increase the Frame Error Rate (FER) of the IEEE 802.15.4 networks by up to 60% and decrease the throughput of the IEEE 802.11g networks by up to 55%.
Keywords: wireless performance analysis, coexistence analysis, IEEE 802.11g, IEEE 802.15.4
Procedia PDF Downloads 558
7991 Effects of Particle Size Distribution on Mechanical Strength and Physical Properties in Engineered Quartz Stone
Authors: Esra Arici, Duygu Olmez, Murat Ozkan, Nurcan Topcu, Furkan Capraz, Gokhan Deniz, Arman Altinyay
Abstract:
Engineered quartz stone is a composite material comprising approximately 90 wt.% fine quartz aggregate, with a variety of particle size ranges, and approximately 10 wt.% unsaturated polyester resin (UPR). The objective of this study is to investigate the influence of particle size distribution on the mechanical strength and physical properties of engineered stone slabs. For this purpose, granular quartz with two particle size ranges, 63–200 µm and 100–300 µm, was used individually and mixed in different ratios. The void volume of each granular packing was measured in order to define the amount of filler (quartz powder finer than 38 µm) and UPR required to fill the inter-particle spaces. Test slabs were prepared using vibration-compression under vacuum. The study reports that both the impact strength and the flexural strength of the samples increased as the mix ratio of the 63–200 µm particle size range increased. On the other hand, the water absorption rate, apparent density, and abrasion resistance were not affected by the particle size distribution, owing to vacuum compaction. It is found that increasing the mix ratio of the 63–200 µm particle size range caused higher porosity, which led to an increase in the amount of binder paste needed. It is also observed that homogeneity in the slabs improved with the 63–200 µm particle size range.
Keywords: engineered quartz stone, fine quartz aggregate, granular packing, mechanical strength, particle size distribution, physical properties
Procedia PDF Downloads 153
7990 Leveraging the Power of Dual Spatial-Temporal Data Scheme for Traffic Prediction
Authors: Yang Zhou, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao
Abstract:
Traffic prediction is a fundamental problem in urban environments, facilitating the smart management of various businesses, such as taxi dispatching, bike relocation, and stampede alerts. Most earlier methods rely on identifying the intrinsic spatial-temporal correlation to forecast. However, the complex nature of this problem calls for a more sophisticated solution that can simultaneously capture the mutual influence of both adjacent and far-flung areas, with the time-dimension information incorporated seamlessly. To tackle this difficulty, we propose a new multi-phase architecture, DSTDS (Dual Spatial-Temporal Data Scheme for traffic prediction), that aims to reveal the underlying relationships that determine future traffic trends. First, a graph-based neural network with an attention mechanism is devised to obtain the static features of the road network. Then, a multi-granularity recurrent neural network is built in conjunction with the knowledge from a grid-based model. Subsequently, the preceding output is fed into a spatial-temporal super-resolution module. With this three-phase structure, we carry out extensive experiments on several real-world datasets to demonstrate the effectiveness of our approach, which surpasses several state-of-the-art methods.
Keywords: traffic prediction, spatial-temporal, recurrent neural network, dual data scheme
Procedia PDF Downloads 119
7989 Calculate Product Carbon Footprint through the Internet of Things from Network Science
Authors: Jing Zhang
Abstract:
Reducing mankind's carbon footprint and becoming more sustainable is one of the major challenges of our era. The Internet of Things (IoT) mainly addresses three types of connection: Things to Things (T2T), Human to Things (H2T), and Human to Human (H2H). Borrowing this classification from IoT, the carbon footprints of industries can also be divided in these three ways. Therefore, monitoring the routes along which products are generated and circulated may help calculate the product carbon footprint. This paper does not consider any technique used by IoT itself, but uses its ideas to look at the connections between products. A carbon footprint is like a gene or mark of a product, carried from the raw materials to the final product, which never leaves the product. The contribution of this paper is to combine the characteristics of IoT with the methodology of network science to find a way to calculate a product's carbon footprint. Life cycle assessment (LCA) is the traditional and main tool for calculating the carbon footprint of products, and it comes in three kinds.
Keywords: product carbon footprint, Internet of Things, network science, life cycle assessment
Procedia PDF Downloads 119
7988 Privacy-Preserving Model for Social Network Sites to Prevent Unwanted Information Diffusion
Authors: Sanaz Kavianpour, Zuraini Ismail, Bharanidharan Shanmugam
Abstract:
Social Network Sites (SNSs) serve as an invaluable platform to transfer information across a large number of individuals. A substantial component of communicating and managing information is identifying which individuals will influence others in propagating information, and whether dissemination of information will occur in the absence of social signals about that information. Classifying the final audience of social data is difficult, as fully controlling the social contexts that transfer among individuals is not possible. Hence, undesirable information diffusion to unauthorized individuals on SNSs can threaten individuals' privacy. This paper highlights information diffusion in SNSs and, moreover, emphasizes the most significant privacy issues for individuals on SNSs. The goal of this paper is to propose a privacy-preserving model that carefully safeguards individuals' data in order to control the availability of data and improve privacy by providing access to the data for appropriate third parties without compromising the advantages of information sharing through SNSs.
Keywords: anonymization algorithm, classification algorithm, information diffusion, privacy, social network sites
Procedia PDF Downloads 323
7987 Enhancing the Pricing Expertise of an Online Distribution Channel
Authors: Luis N. Pereira, Marco P. Carrasco
Abstract:
Dynamic pricing is a revenue management strategy in which hotel suppliers define, over time, flexible and different prices for their services for different potential customers, considering the profile of e-consumers and the demand and market supply. This means that the fundamentals of dynamic pricing are based on economic theory (price elasticity of demand) and market segmentation. This study aims to define a dynamic pricing strategy and an offer contextualized to the e-consumer profile in order to improve the number of reservations of an online distribution channel. Segmentation methods (hierarchical and non-hierarchical) were used to identify and validate an optimal number of market segments. A profile of each market segment was studied, considering the characteristics of the e-consumers and the probability of reserving a room. In addition, the price elasticity of demand was estimated for each segment using econometric models. Finally, predictive models were used to define rules for classifying new e-consumers into the pre-defined segments. The empirical study illustrates how it is possible to improve the intelligence of an online distribution channel system through an optimal dynamic pricing strategy and an offer contextualized to the profile of each new e-consumer. A database of 11 million e-consumers of an online distribution channel was used in this study. The results suggest that an appropriate policy of market segmentation in online reservation systems benefits service suppliers because it brings a higher probability of reservation and generates more profit than fixed pricing.
Keywords: dynamic pricing, e-consumers segmentation, online reservation systems, predictive analytics
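A minimal sketch of the two analytical building blocks described above, assuming k-means for the non-hierarchical segmentation and a log-log regression for the segment-level price elasticity; neither choice is confirmed as the authors' exact specification.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def segment_consumers(features, n_segments=4):
    """Non-hierarchical (k-means) segmentation of e-consumer profiles.
    The number of segments is an assumption; the paper validates it against
    a hierarchical solution."""
    km = KMeans(n_clusters=n_segments, n_init=10, random_state=0)
    return km.fit_predict(features), km

def price_elasticity(prices, quantities):
    """Log-log regression: the slope is the (constant) price elasticity of
    demand for one segment."""
    X = np.log(np.asarray(prices, float)).reshape(-1, 1)
    y = np.log(np.asarray(quantities, float))
    return LinearRegression().fit(X, y).coef_[0]
```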
Procedia PDF Downloads 237
7986 Modelling Operational Risk Using Extreme Value Theory and Skew t-Copulas via Bayesian Inference
Authors: Betty Johanna Garzon Rozo, Jonathan Crook, Fernando Moreira
Abstract:
Operational risk losses are heavy-tailed and are likely to be asymmetric and extremely dependent among business lines/event types. We propose a new methodology to assess, in a multivariate way, the asymmetry and extreme dependence between severity distributions and to calculate the capital for operational risk. This methodology simultaneously uses (i) several parametric distributions and an alternative mixture distribution (the Lognormal for the body of losses and the Generalized Pareto Distribution for the tail) via extreme value theory using SAS®, (ii) the multivariate skew t-copula, applied for the first time to operational losses, and (iii) Bayesian theory to estimate new n-dimensional skew t-copula models via Markov chain Monte Carlo (MCMC) simulation. This paper analyses a new operational loss data set, SAS Global Operational Risk Data [SAS OpRisk], to model operational risk at international financial institutions. All the severity models are constructed in SAS® 9.2 using the procedures PROC SEVERITY and PROC NLMIXED. This paper focuses on describing this implementation.
Keywords: operational risk, loss distribution approach, extreme value theory, copulas
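Outside SAS®, the body-tail severity split can be sketched with scipy as below; the threshold choice and parameterisation are assumptions, and the copula and Bayesian steps are omitted.

```python
import numpy as np
from scipy import stats

def fit_body_tail(losses, tail_threshold):
    """Mixture severity model from the abstract: Lognormal for the body of the
    losses and a Generalized Pareto Distribution (GPD) for the tail above a
    chosen threshold (the threshold choice itself is an assumption here)."""
    losses = np.asarray(losses, dtype=float)
    body = losses[losses <= tail_threshold]
    exceedances = losses[losses > tail_threshold] - tail_threshold
    # Lognormal body: fix loc=0 so the parameters are shape (sigma) and scale (exp(mu))
    sigma, _, scale = stats.lognorm.fit(body, floc=0)
    # GPD tail fitted to the exceedances over the threshold
    xi, _, beta = stats.genpareto.fit(exceedances, floc=0)
    return (sigma, scale), (xi, beta)
```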
Procedia PDF Downloads 608
7985 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation
Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell
Abstract:
Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive experimental iterative trial and development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with different models proposed. Errors from modeling, and the statistical errors which arise from fitting these models, will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation is proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework is proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that the distribution can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, represents the first stochastic finite element implementation in composite process modelling.
Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models
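The per-experiment calibration step can be illustrated with a plain random-walk Metropolis sampler, as sketched below; the hierarchical pooling across experiments, the Maximum Entropy approximation, and the FEniCS forward model are omitted, and the polynomial likelihood is an assumption made only for illustration.

```python
import numpy as np

def metropolis(log_post, theta0, n_steps=5000, step=0.05, rng=None):
    """Random-walk Metropolis sampler: the MCMC building block used to learn
    the distribution of uncertain model coefficients for one experiment."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

def make_log_post(strain, stress, sigma=0.1):
    """Assumed Gaussian likelihood for stress observations under a polynomial
    constitutive model, with a weak Gaussian prior on the coefficients."""
    def log_post(theta):
        pred = np.polyval(theta, strain)
        return (-0.5 * np.sum((stress - pred) ** 2) / sigma ** 2
                - 0.5 * np.sum(theta ** 2) / 10.0 ** 2)
    return log_post
```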
Procedia PDF Downloads 149
7984 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test
Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman
Abstract:
At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study of six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution used with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L-moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most of the cases; however, a difference of up to 38% has been noted in flood quantiles for an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states.
Keywords: floods, FLIKE, probability distributions, flood frequency, outlier
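As an illustration of the quantile estimation step (not of FLIKE itself), the sketch below fits a GEV distribution to an annual-maximum series with scipy and returns the 1-in-100 AEP quantile; scipy uses maximum likelihood rather than the L-moments used in the paper, so estimates will differ.

```python
import numpy as np
from scipy import stats

def gev_flood_quantile(annual_maxima, aep=0.01):
    """Fit a GEV distribution to an annual-maximum flood series and return the
    quantile for a given annual exceedance probability (AEP),
    e.g. 1 in 100 -> aep = 0.01."""
    shape, loc, scale = stats.genextreme.fit(np.asarray(annual_maxima, float))
    return stats.genextreme.ppf(1.0 - aep, shape, loc=loc, scale=scale)
```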
Procedia PDF Downloads 455
7983 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review
Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha
Abstract:
Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is not far-fetched, but proper classification of this textual information in a given context has been very difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to gain a better understanding of how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media textual information of a given context between hate speech and inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the number of studies down to 31. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of performance accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI-oriented library functionalities. Based on some of the important findings from this study, we make recommendations for future research.
Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text
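A minimal sketch of the kind of hybrid CNN+LSTM classifier the review found to perform best; the embedding size, filter counts, and sequence length are assumptions.

```python
from tensorflow.keras import layers, models

def cnn_lstm_sentiment(vocab_size=20000, seq_len=100, num_classes=2):
    """Hybrid sentiment classifier: a 1D convolution extracts local n-gram
    features, and an LSTM captures longer-range context."""
    m = models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, 128),
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(64),
        layers.Dense(num_classes, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m
```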
Procedia PDF Downloads 119
7982 [Keynote Talk]: Knowledge Codification and Innovation Success within Digital Platforms
Authors: Wissal Ben Arfi, Lubica Hikkerova, Jean-Michel Sahut
Abstract:
This study examines interfirm networks in the digital transformation era and, in particular, how tacit knowledge codification affects innovation success within digital platforms. One of the most important features of digital transformation and innovation process outcomes is the emergence of digital platforms, as interfirm networks, at the heart of open innovation. This research aims to illuminate how digital platforms influence inter-organizational innovation through virtual team interactions and knowledge sharing practices within an interfirm network. Consequently, it contributes to the strategic management literature on new product development (NPD), open innovation, industrial management, and the management of emerging interfirm networks. The empirical findings show, on the one hand, that knowledge conversion may be enhanced, especially by socialization, which seems to be the most important phase, as it has played a crucial role in holding the virtual team members together. On the other hand, in the process of socialization, tacit knowledge codification is crucial because it provides the structure needed for the interfirm network actors to interact and act to reach common goals, which favors the emergence of open innovation. Finally, our results offer several conditions, necessary but not always sufficient, for interfirm managers involved in NPD and innovation, concerning strategies to shape increasingly interconnected and borderless markets and business collaborations. In the digital transformation era, the need for adaptive and innovative business models as well as new and flexible network forms is becoming more significant than ever. Supported by technological advancements and digital platforms, companies could benefit from increased market opportunities and from creating new markets for their innovations through alliances and collaborative strategies, as a mode of reducing or eliminating uncertain environments or entry barriers. Consequently, an efficient and well-structured interfirm network is essential to create network capabilities, ensure tacit knowledge sharing, enhance organizational learning, and foster open innovation success within digital platforms.
Keywords: interfirm networks, digital platform, virtual teams, open innovation, knowledge sharing
Procedia PDF Downloads 135
7981 ICanny: CNN Modulation Recognition Algorithm
Authors: Jingpeng Gao, Xinrui Mao, Zhibin Deng
Abstract:
Aiming at the low recognition rate for composite signal modulation at low signal-to-noise ratio (SNR), this paper proposes a modulation recognition algorithm based on ICanny-CNN. Firstly, the radar signal is transformed into a time-frequency image by the Choi-Williams Distribution (CWD). Secondly, we propose an image processing algorithm using the guided filter and a threshold selection method, combined with hole filling and a mask operation. Finally, a shallow convolutional neural network (CNN) is combined with the ideas of depth-wise convolution (Dw Conv) and point-wise convolution (Pw Conv). The proposed CNN is designed to perform image classification and realize modulation recognition of radar signals. The simulation results show that the proposed algorithm reaches 90.83% accuracy at 0 dB and 71.52% at -8 dB. Therefore, the proposed algorithm has good classification and anti-noise performance in radar signal modulation recognition and other fields.
Keywords: modulation recognition, image processing, composite signal, improved Canny algorithm
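The depth-wise/point-wise idea can be sketched as a small Keras block, shown below; the input size, depth, and class count are assumptions rather than the authors' configuration, and the CWD and guided-filter stages are omitted.

```python
from tensorflow.keras import layers, models

def dw_pw_block(x, filters):
    """Depth-wise convolution (per-channel spatial filtering) followed by a
    point-wise 1x1 convolution that mixes channels."""
    x = layers.DepthwiseConv2D(3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    return x

def build_shallow_cnn(input_shape=(128, 128, 1), num_classes=8):
    """Shallow classifier sketch operating on time-frequency images."""
    inp = layers.Input(shape=input_shape)
    x = dw_pw_block(inp, 32)
    x = layers.MaxPooling2D()(x)
    x = dw_pw_block(x, 64)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inp, out)
```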
Procedia PDF Downloads 194
7980 Design and Implementation of Neural Network Based Controller for Self-Driven Vehicle
Authors: Hassam Muazzam
Abstract:
This paper devises an autonomous self-driven vehicle that is capable of taking a disabled person to his/her desired location, using three different power sources (gasoline, solar, electric), without any control from the user and while avoiding obstacles along the way. The GPS coordinates of the desired location are sent to the main processing board via a GSM module. After the GPS coordinates are received, the path to be followed by the vehicle is devised using the Pythagorean theorem: the distance and angle between the present location and the desired location are calculated, and then the vehicle starts moving in the desired direction. Meanwhile, real-time data from ultrasonic sensors are fed to the board for the obstacle avoidance mechanism. Ultrasonic sensors are used to quantify the distance of the vehicle from an object. The distance and position of the object are then used to make decisions regarding the direction of the vehicle in order to avoid obstacles, using an artificial neural network implemented on an ATmega1280. The vehicle also provides location feedback to a remote location.
Keywords: autonomous self-driven vehicle, obstacle avoidance, desired location, Pythagoras theorem, neural network, remote location
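The distance-and-angle step can be sketched as below, using a flat-earth conversion of the GPS coordinates followed by the Pythagorean theorem; the metres-per-degree constants are standard approximations, not values from the paper.

```python
import math

def distance_and_heading(cur_lat, cur_lon, dst_lat, dst_lon):
    """Flat-earth approximation: convert the lat/lon differences to metres,
    then use the Pythagorean theorem for the straight-line distance and atan2
    for the heading (0 deg = north, measured clockwise)."""
    m_per_deg_lat = 111_320.0                          # rough constant
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(cur_lat))
    dy = (dst_lat - cur_lat) * m_per_deg_lat           # northward metres
    dx = (dst_lon - cur_lon) * m_per_deg_lon           # eastward metres
    distance = math.hypot(dx, dy)                      # Pythagoras
    heading = math.degrees(math.atan2(dx, dy)) % 360.0
    return distance, heading
```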
Procedia PDF Downloads 412
7979 A QoS Aware Cluster Based Routing Algorithm for Wireless Mesh Network Using LZW Lossless Compression
Authors: J. S. Saini, P. P. K. Sandhu
Abstract:
The multi-hop nature of Wireless Mesh Networks and the rapid growth of throughput demands result in multi-channel and multi-radio structures in mesh networks, but co-channel interference reduces the total throughput, specifically in multi-hop networks. Quality of Service (QoS) refers to a broad collection of networking technologies and techniques that guarantee the ability of a network to provide the desired services with predictable results. QoS can be directed at a network interface, at a specific server's or router's performance, or at specific applications. Due to interference among the various transmissions, QoS routing in multi-hop wireless networks is a formidable task; in a multi-channel wireless network, two transmissions using the same channel may interfere with each other. This paper considers the Destination Sequenced Distance Vector (DSDV) routing protocol to locate a secure and optimised path. The proposed technique also utilizes Lempel–Ziv–Welch (LZW) based lossless data compression and intra-cluster data aggregation to enhance the communication between the source and the destination. Clustering makes it possible to aggregate multiple packets and locate a single route through the clusters, improving intra-cluster data aggregation, while the LZW-based lossless data compression reduces the data packet size and hence consumes less energy, thus increasing the network QoS. MATLAB has been used to evaluate the effectiveness of the proposed technique. The comparative analysis shows that the proposed technique outperforms the existing techniques.
Keywords: WMNs, QoS, flooding, collision avoidance, LZW, congestion control
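The compression stage relies on the textbook LZW algorithm, sketched below; treating the sensor payload as a byte string is an assumption.

```python
def lzw_compress(data: bytes):
    """Textbook LZW: grow a dictionary of byte sequences and emit integer
    codes; repetitive payloads compress well."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = next_code
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

# Example: a repetitive reading pattern shrinks from 12 symbols to 8 codes.
codes = lzw_compress(b"231231231231")
```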
Procedia PDF Downloads 344
7978 Analysis and Prediction of Netflix Viewing History Using Netflixlatte as an Enriched Real Data Pool
Authors: Amir Mabhout, Toktam Ghafarian, Amirhossein Farzin, Zahra Makki, Sajjad Alizadeh, Amirhossein Ghavi
Abstract:
The high number of Netflix subscribers makes it attractive for data scientists to extract valuable knowledge from viewers' behavioural analyses. This paper presents a set of statistical insights into viewers' viewing history. A deep learning model is then used to predict the future watching behaviour of users based on their previous watching history within the Netflixlatte data pool. Netflixlatte is an aggregated and anonymized data pool of 320 Netflix viewers with 250,000 data points recorded between 2008 and 2022. We observe insightful correlations between the distribution of viewing time and the COVID-19 pandemic outbreak. The presented deep learning model predicts future movie and TV series viewing habits with an average loss of 0.175.
Keywords: data analysis, deep learning, LSTM neural network, Netflix
Procedia PDF Downloads 263
7977 Enhanced Magnetic Hyperthermic Efficiency of Ferrite Based Nanoparticles
Authors: J. P. Borah, R. D. Raland
Abstract:
Hyperthermia is one of many techniques used to destroy cancerous cells. It uses physical methods to heat a certain organ or tissue, delivering an adequate temperature for an appropriate period of time to the entire tumor volume in order to achieve optimal therapeutic results. Magnetic metal ferrite nanoparticles (MFe₂O₄, where M = Mn, Zn, Ni, Co, Mg, etc.) are among the most promising candidates for hyperthermia due to their tunability, biocompatibility, chemical stability, and notable ability to mediate a high rate of heat induction. However, to obtain the desirable properties for these applications, it is important to optimize their chemical composition, structure, and magnetic properties. These properties are mainly sensitive to the cation distribution over the tetrahedral and octahedral sites. Among the ferrites, zinc ferrite (ZnFe₂O₄) and manganese ferrite (MnFe₂O₄) are strong candidates for hyperthermia applications because Mn and Zn are non-magnetic cations, so the magnetic property is determined only by the cation distribution of iron, which provides a better platform to manipulate or tailor the properties. In this talk, the influence of doping and surfactants on cation redistribution, leading to an enhancement of the magnetic properties of ferrite nanoparticles, will be demonstrated. The efficiency of heat generation in association with the enhanced magnetic properties is also discussed.
Keywords: magnetic nanoparticle, hyperthermia, x-ray diffraction, TEM study
Procedia PDF Downloads 169
7976 Size Distribution Effect of InAs/InP Self-Organized Quantum Dots on Optical Properties
Authors: Abdelkader Nouri, M’hamed Bouslama, Faouzi Saidi, Hassan Maaref, Michel Gendry
Abstract:
Self-organized InAs quantum dots (QDs) have been grown on a 3.1% lattice-mismatched InP (110) substrate by Solid Source Molecular Beam Epitaxy (SSMBE). The Stranski-Krastanov growth mode has been used to create self-assembled 3D islands on the InAs wetting layer (WL). The optical quality is evaluated as a function of temperature and excitation power. In addition, Atomic Force Microscopy (AFM) images show an inhomogeneous island dot size distribution due to temperature-induced coalescence. The quantum size effect was clearly observed through the shape of the photoluminescence (PL) spectra.
Keywords: AFM, InAs QDs, PL, SSMBE
Procedia PDF Downloads 692
7975 Strain Distribution Profiles of EDD Steel at Elevated Temperatures
Authors: Eshwara Prasad Koorapati, R. Raman Goud, Swadesh Kumar Singh
Abstract:
In the present work, forming limit diagrams and strain distribution profile diagrams for extra deep drawing (EDD) steel at room and elevated temperatures have been determined experimentally by conducting stretch forming experiments using a designed and fabricated warm stretch forming tooling setup. With the help of forming limit diagrams (FLDs) and strain distribution profile diagrams, the formability of extra deep drawing steel has been analyzed and correlated with mechanical properties such as the strain hardening coefficient (n) and normal anisotropy (r̄). The mechanical properties of EDD steel from room temperature to 450 °C were determined, and the impact of temperature on properties such as the work hardening exponent (n), anisotropy (r̄), and strength coefficient of the material is discussed. The fractured surfaces after stretching also underwent metallurgical investigation, and an attempt has been made to correlate the results with the formability of the EDD steel sheets. They show good agreement with the FLDs at various temperatures.
Keywords: FLD, micro hardness, strain distribution profile, stretch forming
Procedia PDF Downloads 426
7974 The Role of the Rate of Profit Concept in Creating Economic Stability in Islamic Financial Market
Authors: Trisiladi Supriyanto
Abstract:
This study aims to establish a concept of the rate of profit in Islamic banking that can create economic justice and stability in the Islamic financial market (banking and capital markets). A rate of profit that creates economic justice and stability can be achieved through its role in maintaining the stability of the financial system, in which there is an equitable distribution of income and wealth. To determine the role of the rate of profit as the basis of the profit sharing system implemented in the Islamic financial system, we can look at the connection between the rate of profit and financial stability, especially in the asset-liability management of financial institutions, which generates a stable net margin, that is, a rate of profit that is not affected by the ups and downs of market risk factors, including the indirect effect of interest rates. Furthermore, Islamic financial stability can be seen in the role of the rate of profit in the stability of Islamic financial asset values, measured by Islamic financial asset price volatility in the Islamic bond market within the capital market.
Keywords: economic justice, equitable distribution of income, equitable distribution of wealth, rate of profit, stability in the financial system
Procedia PDF Downloads 317
7973 First Order Moment Bounds on DMRL and IMRL Classes of Life Distributions
Authors: Debasis Sengupta, Sudipta Das
Abstract:
The class of life distributions with decreasing mean residual life (DMRL) is well known in the field of reliability modeling. It contains the IFR class of distributions and is contained in the NBUE class of distributions. While upper and lower bounds on the reliability function of aging classes such as IFR, IFRA, NBU, NBUE, and HNBUE have been discussed in the literature for a long time, no analogous result is available for the DMRL class. We obtain upper and lower bounds for the reliability function of the DMRL class in terms of the first-order finite moment. The lower bound is obtained by showing that, for any fixed time, the minimization of the reliability function over the class of all DMRL distributions with a fixed mean is equivalent to its minimization over a smaller class of distributions with a special form; optimization over this restricted set can be carried out algebraically. Likewise, the maximization of the reliability function over the class of all DMRL distributions with a fixed mean turns out to be a parametric optimization problem over the class of DMRL distributions of a special form. The constructive proofs also establish that both the upper and lower bounds are sharp. Further, the DMRL upper bound coincides with the HNBUE upper bound, and the lower bound coincides with the IFR lower bound. We also prove a pair of sharp upper and lower bounds for the reliability function when the distribution has increasing mean residual life (IMRL) with a fixed mean; this result is proved in a similar way. These inequalities fill a long-standing void in the literature on life distribution modeling.
Keywords: DMRL, IMRL, reliability bounds, hazard functions
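For reference, the quantity the DMRL and IMRL classes are defined through is the mean residual life, written out below in standard notation (S denotes the survival function); this is textbook background, not a result of the paper.

```latex
% Mean residual life of a lifetime X with survival function S(t) = P(X > t):
\[
  m(t) \;=\; \mathbb{E}\!\left[X - t \mid X > t\right]
        \;=\; \frac{\int_t^{\infty} S(u)\,\mathrm{d}u}{S(t)},
  \qquad
  \text{DMRL: } m(t_1) \ge m(t_2) \text{ whenever } t_1 \le t_2 .
\]
% X is IMRL if m(t) is instead non-decreasing; m(0) = E[X] is the first moment
% in terms of which the bounds on the reliability function are expressed.
```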
Procedia PDF Downloads 399
7972 Students’ Online Forum Activities and Social Network Analysis in an E-Learning Environment
Authors: P. L. Cheng, I. N. Umar
Abstract:
The online discussion forum is a popular e-learning technique that allows participants to interact and construct knowledge. This study aims to examine the levels of participation, the categories of participants, and the structure of their interactions in a forum. A convenience sample of one course coordinator and 23 graduate students was selected for this study. The forum's log file and Social Network Analysis software were used. The analysis reveals 610 activities (including viewing a forum topic, viewing a discussion thread, posting a new thread, replying to other participants' posts, updating an existing thread, and deleting a post) performed by the participants, with an average of 3.83 threads posted. The forum consists of five at-risk participants, six bridging participants, four isolated participants, and five leaders of information. In addition, the network density value is 0.15, and there are five reciprocal interactions in the forum. The closeness values varied between 28 and 68, while the eigenvector centrality values varied between 0.008 and 0.39. The findings indicate that the participants tend to listen rather than express their opinions in the forum. It was also revealed that those who actively provide support in the discussion forum were not the same people who received the most responses from their peers. This study found that cliques do not exist in the forum and that the participants are not selective about whom they respond to; rather, responses were based on the content of the posts made by their peers. Based upon the findings, further analysis with a different method and population, a larger sample size, and a longer time frame is recommended.
Keywords: e-learning, learning management system, online forum, social network analysis
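The reported measures can be reproduced on any reply network with standard SNA tooling; the sketch below uses networkx on a tiny made-up reply graph (an assumption for illustration, not the course data).

```python
import networkx as nx

# Directed reply network: an edge u -> v means participant u replied to v's post.
G = nx.DiGraph()
G.add_edges_from([
    ("coordinator", "s01"), ("s01", "s02"), ("s02", "s01"),   # s01 <-> s02 is reciprocal
    ("s02", "s03"), ("s03", "s01"), ("s03", "coordinator"), ("s04", "s02"),
])

density = nx.density(G)                                     # fraction of possible ties present
closeness = nx.closeness_centrality(G)                      # how near a node is to the others
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)   # influence via well-connected peers
reciprocal_pairs = sum(1 for u, v in G.edges if G.has_edge(v, u)) // 2
```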
Procedia PDF Downloads 392
7971 Effect of Depth on the Distribution of Zooplankton in Wushishi Lake Minna, Niger State, Nigeria
Authors: Adamu Zubairu Mohammed, Fransis Oforum Arimoro, Salihu Maikudi Ibrahim, Y. I. Auta, T. I. Arowosegbe, Y. Abdullahi
Abstract:
The present study was conducted to evaluate the effect of depth on the distribution of zooplankton and some physicochemical parameters in Tungan Kawo Lake (Wushishi dam). Water and zooplankton samples were collected from the surface, 3.0 metres deep, and 6.0 metres deep, over a period of 24 hours, for six months. Standard procedures were adopted for the determination of the physicochemical parameters. The results show significant differences in pH, DO, BOD, hardness, Na, and Mg. A total of 1764 zooplankton were recorded, comprising 35 species: 18 species of Cladocera (58%), 14 species of Copepoda (41%), and 3 species of Diptera (1.0%). More zooplankton were recorded at the 3.0-metre depth than at the other two depths, and a significant difference was observed in the distribution of Ceriodaphnia dubia, Daphnia laevis, and Leptodiaptomus coloradensis. Although the highest zooplankton abundance was recorded at 3.0 metres, Leptodiaptomus coloradensis was the species with the most individuals observed at the 6.0-metre depth, followed by Daphnia laevis. Canonical correspondence analysis between the physicochemical parameters and the zooplankton indicated a good relationship in the lake: Ceriodaphnia dubia was found to have a good association with oxygen, sodium, and potassium, while Daphnia laevis and Leptodiaptomus coloradensis had a good relationship with magnesium and phosphorus. It was generally observed that depth does not have much influence on the distribution of zooplankton in Wushishi Lake.
Keywords: zooplankton, standard procedures, canonical correspondence analysis, Wushishi, physicochemical parameter
Procedia PDF Downloads 95
7970 Advancing the Hi-Tech Ecosystem in the Periphery: The Case of the Sea of Galilee Region
Authors: Yael Dubinsky, Orit Hazzan
Abstract:
There is a constant need for hi-tech innovation to be decentralized to peripheral regions. This work describes how we applied design science research (DSR) principles to define what we refer to as the Sea of Galilee (SoG) method. The goal of the SoG method is to harness existing and new technological initiatives in peripheral regions to create a socio-technological network that can initiate and maintain hi-tech activities. The SoG method consists of a set of principles, a stakeholder network, and actual hi-tech business initiatives, including their infrastructure and practices. The three cycles of DSR, the relevance, design, and rigor cycles, lay out a research framework to sharpen the requirements, collect data from case studies, and iteratively refine the SoG method based on the existing knowledge base. We propose that the SoG method can be deployed by regional authorities that wish to be considered smart regions (an extension of the notion of smart cities).
Keywords: design science research, socio-technological initiatives, Sea of Galilee method, periphery stakeholder network, hi-tech initiatives
Procedia PDF Downloads 135
7969 HPSEC Application as a New Indicator of Nitrification Occurrence in Water Distribution Systems
Authors: Sina Moradi, Sanly Liu, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Soha Habibi, Rose Amal
Abstract:
In recent years, chloramine has been widely used for both primary and secondary disinfection. However, a major concern with the use of chloramine as a secondary disinfectant is chloramine decay and the occurrence of nitrification. The management of chloramine decay and the prevention of nitrification are critical for water utilities managing chloraminated drinking water distribution systems. The detection and monitoring of nitrification episodes is usually carried out by measuring certain water quality parameters, which are commonly referred to as indicators of nitrification. The approach taken in this study was to collect water samples from different sites throughout a drinking water distribution system, Tailem Bend – Keith (TBK) in South Australia, and analyse the samples by high performance size exclusion chromatography (HPSEC). We investigated potential associations between the water quality profiles from the HPSEC analysis and chloramine decay and/or nitrification occurrence. MATLAB 8.4 was used to process the HPSEC and chloramine decay data. An increase in the absorbance signal of the HPSEC profiles at λ=230 nm between apparent molecular weights of 200 and 1000 Da was observed at sampling sites that experienced rapid chloramine decay and nitrification, while the absorbance signal of the HPSEC profiles at λ=254 nm decreased. An increase in absorbance at λ=230 nm and AMW < 500 Da was detected for Raukkan CT (R.C.T), a location that experienced nitrification and had a significantly lower chloramine residual (<0.1 mg/L). This increase in absorbance was not detected at other sites that did not experience nitrification. Moreover, the UV absorbance at 254 nm of the HPSEC spectra was lower at R.C.T than at the other sites. In this study, a chloramine residual index (C.R.I) was introduced as a new indicator of chloramine decay and nitrification occurrence; it is defined as the ratio of the areas underneath the HPSEC spectra at the two wavelengths of 230 and 254 nm. The C.R.I index is able to indicate distribution system sites that experienced nitrification and rapid chloramine loss. This index could be useful for water treatment and distribution system managers to determine whether nitrification is occurring at a specific location in a water distribution system.
Keywords: nitrification, HPSEC, chloramine decay, chloramine residual index
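Given digitised HPSEC traces at the two wavelengths, the index can be sketched as a simple area ratio, as below; which wavelength forms the numerator and the AMW integration window used are assumptions, since the abstract only states that the index is a ratio of the two areas.

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal-rule area under y(x)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

def chloramine_residual_index(amw, abs_230, abs_254):
    """C.R.I sketch: ratio of the areas under the HPSEC signals recorded at
    230 nm and 254 nm, integrated over apparent molecular weight (AMW)."""
    return _trapezoid(abs_230, amw) / _trapezoid(abs_254, amw)
```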
Procedia PDF Downloads 301
7968 Determination of Frequency Relay Setting during Distributed Generators Islanding
Authors: Tarek Kandil, Ameen Ali
Abstract:
Distributed generation (DG) has recently gained a lot of momentum in the power industry due to market deregulation and environmental concerns. One of the most significant technical challenges facing DGs is the islanding of distributed generators. The current industry practice is to disconnect all distributed generators immediately after the occurrence of an island, within 200 to 350 ms after loss of the main supply. To achieve this goal, each DG must be equipped with an islanding detection device. Frequency relays are one of the most commonly used loss-of-mains detection methods. However, distribution utilities may be faced with concerns related to false operation of these frequency relays due to improper settings, as commercially available frequency relays typically adopt standard tight settings. This paper investigates some factors related to the relays' internal algorithms that contribute to their different operating responses. Further, relay operation in the presence of multiple distributed generators in the same network is analyzed. Finally, the relay setting can be accurately determined based on this investigation and analysis.
Keywords: frequency relay, distributed generation, islanding detection, relay setting
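A minimal sketch of the kind of definite-time frequency element whose settings are at issue, assuming a pickup band around nominal frequency and a fixed time delay; this is a generic illustration, not a vendor algorithm or the paper's model.

```python
def relay_trip_time(freq_trace_hz, dt_s, pickup_hz=0.5, nominal_hz=50.0, delay_s=0.2):
    """Trip when |f - f_nominal| exceeds the pickup for longer than the time
    delay. Returns the trip time in seconds, or None if the relay never operates."""
    timer = 0.0
    for i, f in enumerate(freq_trace_hz):
        if abs(f - nominal_hz) > pickup_hz:
            timer += dt_s
            if timer >= delay_s:
                return i * dt_s
        else:
            timer = 0.0
    return None

# Example: frequency drifts after islanding; check against the 200-350 ms target.
trace = [50.0] * 10 + [49.2] * 300            # 1 ms samples, drop to 49.2 Hz
print(relay_trip_time(trace, dt_s=0.001))     # -> 0.209 (about 200 ms after leaving the band)
```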
Procedia PDF Downloads 536