Search results for: fuzzy genetic network programming
3738 A Location-based Authentication and Key Management Scheme for Border Surveillance Wireless Sensor Networks
Authors: Walid Abdallah, Noureddine Boudriga
Abstract:
Wireless sensor networks have shown their effectiveness in the deployment of many critical applications, especially in the military domain. Border surveillance is one of these applications, where a set of wireless sensors is deployed along a country's border line to detect illegal intrusion attempts into the national territory and report them to a control center so that the necessary measures can be taken. Given its nature, this wireless sensor network can be the target of many security attacks trying to compromise its normal operation. Particularly, in this application the deployment and location of sensor nodes are of great importance for detecting and tracking intruders. This paper proposes a location-based authentication and key distribution mechanism to secure wireless sensor networks intended for border surveillance, where key establishment is performed using elliptic curve cryptography and an identity-based public key scheme. In this scheme, the public key of each sensor node is authenticated by keys that depend on its position in the monitored area. Before establishing a pairwise key between two nodes, each of them must verify the neighborhood location of the other node using a message authentication code (MAC) calculated on the corresponding public key and keys derived from encrypted beacon messages broadcast by anchor nodes. We show that our proposed public key authentication and key distribution scheme is more resilient to node capture and node replication attacks than currently available schemes. Also, the key distribution between nodes in our scheme generates less communication overhead and hence improves network performance.
Keywords: wireless sensor networks, border surveillance, security, key distribution, location-based
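A minimal sketch of the verification idea described above, assuming HMAC-SHA256 in place of the authors' elliptic-curve and identity-based constructions: a node's public key is accepted only if the verifier can recompute a MAC keyed by secrets derived from the anchor beacons of the claimed cell. All names, key formats, and the cell identifier are hypothetical.

```python
# Illustrative sketch only (not the authors' ECC/identity-based scheme): a node's public key
# is authenticated with a MAC keyed by secrets derived from anchor beacon messages, so the
# verification implicitly depends on the claimed location. All names are hypothetical.
import hmac
import hashlib

def beacon_key(anchor_secret: bytes, cell_id: str) -> bytes:
    """Key a node can only derive if it actually received the anchor's beacon for this cell."""
    return hashlib.sha256(anchor_secret + cell_id.encode()).digest()

def location_mac(public_key: bytes, beacon_keys: list) -> bytes:
    """MAC over the node's public key, keyed by the concatenation of location-derived keys."""
    return hmac.new(b"".join(beacon_keys), public_key, hashlib.sha256).digest()

# Anchor nodes share per-cell secrets; both neighbours derive the same beacon keys for cell "C17".
anchors = [b"anchor-1-secret", b"anchor-2-secret"]
keys_at_claimed_cell = [beacon_key(s, "C17") for s in anchors]

node_public_key = b"\x04" + b"\x11" * 64          # placeholder for an elliptic-curve public key
mac_sent = location_mac(node_public_key, keys_at_claimed_cell)

# The verifying neighbour recomputes the MAC from its own beacon keys for that cell.
mac_check = location_mac(node_public_key, keys_at_claimed_cell)
print("public key accepted:", hmac.compare_digest(mac_sent, mac_check))
```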
Procedia PDF Downloads 664
3737 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network
Authors: Yasaman Sanayei, Alireza Bahiraie
Abstract:
This paper presents a systematic methodology based on the application of artificial neural networks for modelling a sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater technology, which is especially suited for nutrient removal. Removal of reactive dye by Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation with R² > 0.99 and a low mean absolute error (MAE). The results from this study reveal that the developed model is accurate and efficacious in predicting the COD and BOD parameters of the dye-containing wastewater treated by SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with a constant, predefined duration of the stages, resulting in low-efficiency operation. Data obtained from the on-line electronic sensors installed in the SBR and from the quality control laboratory analysis have been used to develop the optimal architecture of two different ANNs. The results have shown that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.
Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis
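A compact sketch of the fitting step, assuming a tiny one-hidden-layer network trained with the Levenberg-Marquardt algorithm via SciPy's least-squares solver. The data below are synthetic stand-ins for the influent COD, MLVSS, and reaction time inputs; the architecture and values are not the ones reported in the study.

```python
# Minimal sketch of fitting a one-hidden-layer network with Levenberg-Marquardt (scipy).
# Inputs/outputs mirror the abstract (influent COD, MLVSS, time -> effluent COD); data are synthetic.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 3))              # normalised [COD_in, MLVSS, time]
y = 0.6 * np.tanh(X @ np.array([1.2, -0.8, 0.5])) + 0.2 + 0.01 * rng.normal(size=60)

n_hidden = 4
def unpack(p):
    W1 = p[:3 * n_hidden].reshape(3, n_hidden)
    b1 = p[3 * n_hidden:4 * n_hidden]
    w2 = p[4 * n_hidden:5 * n_hidden]
    b2 = p[-1]
    return W1, b1, w2, b2

def predict(p, X):
    W1, b1, w2, b2 = unpack(p)
    return np.tanh(X @ W1 + b1) @ w2 + b2

def residuals(p):
    return predict(p, X) - y

p0 = 0.1 * rng.normal(size=5 * n_hidden + 1)
fit = least_squares(residuals, p0, method="lm")       # Levenberg-Marquardt
pred = predict(fit.x, X)
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot, "  MAE =", np.mean(np.abs(y - pred)))
```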
Procedia PDF Downloads 417
3736 Reverse Supply Chain Analysis of Lithium-Ion Batteries Considering Economic and Environmental Aspects
Authors: Aravind G., Arshinder Kaur, Pushpavanam S.
Abstract:
There is a strong emphasis on shifting to electric vehicles (EVs) throughout the globe for reducing the impact on global warming following the Paris climate accord. Lithium-ion batteries (LIBs) are predominantly used in EVs, and these can be a significant threat to the environment if not disposed of safely. Lithium is also a valuable resource that is not widely available. Several research groups are working on developing an efficient recycling process for LIBs. Two routes - the pyrometallurgical and the hydrometallurgical process - have been proposed for recycling LIBs. In this paper, we focus on life cycle assessment (LCA) as a tool to quantify the environmental impact of these recycling processes. We have defined the boundary of the LCA to include only the recycling phase of the end-of-life (EoL) battery life cycle. The analysis is done assuming ideal conditions for the hydrometallurgical and a combined hydrometallurgical and pyrometallurgical process in the inventory analysis. The CML-IA method is used for quantifying the impact assessment across eleven indicators. Our results show that the cathode, anode, and foil contribute significantly to the impact. The environmental impacts of the hydrometallurgical and combined recycling processes are similar across all the indicators. Further, the results of the LCA are used in developing a multi-objective optimization model for the design of a lithium-ion battery recycling network. Greenhouse gas emissions and cost are the two parameters minimized in the optimization study.
Keywords: life cycle assessment, lithium-ion battery recycling, multi-objective optimization, network design, reverse supply chain
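A toy illustration of the two-objective trade-off mentioned above (greenhouse gas emissions versus cost), solved as a weighted-sum linear program. All coefficients, capacities, and tonnages are invented for the sketch and do not come from the study's LCA inventory or network model.

```python
# Toy weighted-sum formulation of the two-objective (GHG vs. cost) routing decision,
# solved with scipy linear programming. Coefficients are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

# Decision variables: tonnes routed to [hydrometallurgical, combined hydro+pyro] recycling.
cost_per_t = np.array([420.0, 380.0])        # assumed processing cost (USD/t)
ghg_per_t = np.array([0.9, 1.3])             # assumed emissions (t CO2-eq/t)

demand = 1000.0                              # tonnes of end-of-life batteries to treat
capacity = np.array([700.0, 600.0])          # per-route capacity (t)

def solve(weight_ghg):
    """Minimise (1-w)*cost + w*ghg, each normalised by its per-tonne magnitude."""
    c = (1 - weight_ghg) * cost_per_t / cost_per_t.max() + weight_ghg * ghg_per_t / ghg_per_t.max()
    res = linprog(c,
                  A_eq=[[1.0, 1.0]], b_eq=[demand],            # all collected batteries are treated
                  bounds=[(0, capacity[0]), (0, capacity[1])])
    return res.x

for w in (0.0, 0.5, 1.0):                    # sweep the trade-off to trace a simple Pareto front
    x = solve(w)
    print(f"w_ghg={w:.1f}  hydro={x[0]:6.1f} t  combined={x[1]:6.1f} t  "
          f"cost={cost_per_t @ x:9.0f} USD  GHG={ghg_per_t @ x:7.1f} t CO2-eq")
```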
Procedia PDF Downloads 162
3735 Denoising Convolutional Neural Network Assisted Electrocardiogram Signal Watermarking for Secure Transmission in E-Healthcare Applications
Authors: Jyoti Rani, Ashima Anand, Shivendra Shivani
Abstract:
In recent years, physiological signals obtained in telemedicine have been stored independently from patient information. In addition, people have increasingly turned to mobile devices for information on health-related topics. Major authentication and security issues may arise from this storage, degrading the reliability of diagnostics. This study introduces an approach to reversible watermarking, which ensures security by utilizing the electrocardiogram (ECG) signal as a carrier for embedding patient information. In the proposed work, Pan-Tompkins++ is employed to convert the 1D ECG signal into a 2D signal. The frequency subbands of the signal are extracted using RDWT (redundant discrete wavelet transform), and then one of the subbands is subjected to MSVD (multiresolution singular value decomposition) for masking. Finally, the encrypted watermark is embedded within the signal. The experimental results show that the watermarked signal obtained is indistinguishable from the original signal, ensuring the preservation of all diagnostic information. In addition, the DnCNN (denoising convolutional neural network) concept is used to denoise the retrieved watermark for improved accuracy. The proposed ECG signal-based watermarking method is supported by experimental results and evaluations of its effectiveness. The results of the robustness tests demonstrate that the watermark is susceptible to the most prevalent watermarking attacks.
Keywords: ECG, VMD, watermarking, PanTompkins++, RDWT, DnCNN, MSVD, chaotic encryption, attacks
Procedia PDF Downloads 111
3734 Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring
Authors: Jean-Francois Rajotte, Martin Sotir, Frank Gouineau
Abstract:
The installation and management of traffic monitoring devices can be costly from both a financial and resource point of view. It is therefore important to take advantage of in-place infrastructures to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counting performed in similar conditions at the same location. The good performance of our system allows us to consider applications which can monitor the traffic conditions in near real-time, making the counting usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution.
Keywords: traffic monitoring, deep learning, image annotation, vehicles, roads, artificial intelligence, real-time systems
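A minimal sketch of the counting step, using an off-the-shelf COCO-trained detector as a stand-in for whatever pre-trained network the authors used; the image path and score threshold are placeholders.

```python
# Count vehicle detections above a confidence threshold in one traffic-camera frame,
# using a pre-trained Faster R-CNN as an illustrative stand-in detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

VEHICLE_IDS = {3, 6, 8}   # COCO categories: car, bus, truck

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def count_vehicles(image_path: str, score_threshold: float = 0.5) -> int:
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    keep = output["scores"] >= score_threshold
    labels = output["labels"][keep].tolist()
    return sum(1 for lab in labels if lab in VEHICLE_IDS)

# print(count_vehicles("camera_frame.jpg"))  # hypothetical low-quality urban camera frame
```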
Procedia PDF Downloads 204
3733 Functional Analysis of Variants Implicated in Hearing Loss in a Cohort from Argentina: From Molecular Diagnosis to Pre-Clinical Research
Authors: Paula I. Buonfiglio, Carlos David Bruque, Lucia Salatino, Vanesa Lotersztein, Sebastián Menazzi, Paola Plazas, Ana Belén Elgoyhen, Viviana Dalamón
Abstract:
Hearing loss (HL) is the most prevalent sensorineural disorder, affecting about 10% of the global population, with more than half of cases due to genetic causes. About 1 in 500-1000 newborns presents congenital HL. Most of the patients are non-syndromic with an autosomal recessive mode of inheritance. To date, more than 100 genes have been related to HL. Therefore, whole-exome sequencing (WES) has become a cost-effective alternative approach for molecular diagnosis. Nevertheless, new challenges arise from the detection of novel variants, in particular missense changes, which can lead to a spectrum of genotype-phenotype correlations that is not always straightforward. In this work, we aimed to identify the genetic causes of HL in isolated and familial cases by designing a multistep approach to analyze target genes related to hearing impairment. Moreover, we performed in silico and in vivo analyses in order to further study the effect of some of the novel variants identified on hair cell function using the zebrafish model. A total of 650 patients were studied by Sanger sequencing and Gap-PCR in the GJB2 and GJB6 genes, respectively, diagnosing 15.5% of sporadic cases and 36% of familial ones. Overall, 50 different sequence variants were detected. Fifty of the undiagnosed patients with moderate HL were tested for deletions in the STRC gene by the multiplex ligation-dependent probe amplification (MLPA) technique, leading to 6% of diagnoses. After this initial screening, 50 families were selected to be analyzed by WES, achieving a diagnosis in 44% of them. Half of the identified variants were novel. A missense variant in the MYO6 gene detected in a family with postlingual HL was selected for further analysis. Protein modeling with the AlphaFold2 software was performed, supporting its pathogenic effect. In order to functionally validate this novel variant, a knockdown phenotype rescue assay in zebrafish was carried out. Injection of wild-type MYO6 mRNA in embryos rescued the phenotype, whereas using the mutant MYO6 mRNA (carrying the c.2782C>A variant) had no effect. These results strongly suggest a deleterious effect of this variant on the mobility of stereocilia in zebrafish neuromasts, and hence on the auditory system. In the present work, we demonstrated that our algorithm is suitable for a sequential multigenic approach to HL in our cohort. These results highlight the importance of a combined strategy to identify candidate variants, as well as of in silico and in vivo studies to analyze and prove their pathogenicity and achieve a better understanding of the mechanisms underlying the pathophysiology of hearing impairment.
Keywords: diagnosis, genetics, hearing loss, in silico analysis, in vivo analysis, WES, zebrafish
Procedia PDF Downloads 99
3732 Modeling Driving Distraction Considering Psychological-Physical Constraints
Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang
Abstract:
Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, model accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying the distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP). The queuing structure of the model is used to perform task invocation and switching for distracted operation and control of the vehicle under driver distraction. Based on the assumption of the QN-MHP model about the cognitive sub-network, server F is a structural bottleneck: later information must wait for earlier information to leave server F before it can be processed there. Therefore, the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task need to go through the visual perception sub-network, and their stimuli are asynchronous, which is known as stimulus onset asynchrony (SOA); this asynchrony must therefore be considered when calculating the waiting time for switching tasks. In the case of auditory distraction, the auditory distraction task and the driving task do not need to compete for the server resources of the perceptual sub-network, and their stimuli can be treated as synchronized without considering the time difference in receiving them. Following the Theory of Planned Behavior for drivers (TPB), this study uses risk entropy as the decision criterion for driver task switching. A logistic regression model with risk entropy as the independent variable is used to determine whether the driver performs a distraction task, explaining the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then executes the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with the physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study (SH-NDS) data to classify the patterns of distracted behavior on different road facilities and obtains three types of distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and its performance is better than the traditional IDM with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of an individual driver. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints
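A compact sketch of two ingredients named above: the classical Intelligent Driver Model acceleration and a logistic model of the probability of switching to a distraction task given perceived risk. The logistic coefficients and the risk value are illustrative placeholders, not the values calibrated on the SH-NDS data.

```python
# IDM car-following acceleration plus a logistic distraction-switching probability (sketch).
import math

def idm_acceleration(v, dv, gap, v0=33.3, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """IDM acceleration for speed v, speed difference dv (v - v_leader), and gap (m)."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

def p_distracted(risk_entropy, beta0=2.0, beta1=-4.0):
    """Logistic probability of engaging in the secondary task given perceived risk (assumed betas)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * risk_entropy)))

# Example: low perceived risk -> the driver is likely to engage the distraction task,
# which a simulation step could model as a delayed or frozen response.
risk = 0.2
print("a_IDM =", round(idm_acceleration(v=25.0, dv=1.0, gap=30.0), 3),
      "  P(distraction) =", round(p_distracted(risk), 3))
```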
Procedia PDF Downloads 97
3731 Client Hacked Server
Authors: Bagul Abhijeet
Abstract:
Background: The client-server model is the backbone of today's Internet communication, in which a normal user cannot have control over a particular website or server. Using the same processing model, one can gain unauthorized access to a particular server. In this paper, we discuss an application scenario of hacking a simple website or server, consisting of an unauthorized way to access the server database. This application autonomously takes direct access to a simple website or server and retrieves all the essential information maintained by the administrator. In this system, the IP address of the server is given as input to retrieve the user-id and password of the server. This breaks the administrative security of the server and acquires control of the server database, whereas a virus helps to escape from server security by crashing the whole server. Objective: To control malicious attacks, protect government websites, and identify illegal hacker activity. Results: After implementing different hacking as well as non-hacking techniques, this system hacks simple websites with normal security credentials. It provides access to the server database and allows the attacker to perform database operations from the client machine. Experiments with this application on different servers provided satisfactory results. Conclusion: In this paper, we have presented an approach to hacking a server which includes some hacking as well as non-hacking methods. These algorithms and methods provide an efficient way to hack a server database. Breaking network security in this way makes it possible to introduce new and better security frameworks. The term "hacking" should not only be considered in terms of its illegal activities; it should also be used to strengthen our global network.
Keywords: hacking, vulnerabilities, dummy request, virus, server monitoring
Procedia PDF Downloads 256
3730 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory
Authors: Liqin Zhang, Liang Yan
Abstract:
This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered a single agent connected through a fixed and undirected network. This paper presents an improved control protocol from three aspects. First, for the purpose of improving both tracking and synchronization performance, a distributed leader-following method is presented. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by using control parameter optimization, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. In practical engineering, simplified models such as the single-integrator and double-integrator are unrealistic. Moreover, previous algorithms require the acceleration information of the leader to be available to all followers if the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable structure algorithm for consensus tracking, which does not require the leader acceleration. The presented scheme optimizes synchronization performance as well as providing satisfactory robustness. Furthermore, while the existing algorithms can obtain a stable synchronous system, the obtained stable system may encounter disturbances that destroy the synchronization. Focusing on this challenging technological problem, a state-dependent-switching approach is introduced. In the presence of unmeasured angular speed and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors. The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results have shown that all followers asymptotically converge to a consistent state when one follower fails to follow the virtual leader during a large enough disturbance, which illustrates the good synchronization control accuracy.
Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization
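A minimal leader-following consensus simulation for motor speeds on a fixed, undirected network, using first-order dynamics only; it omits the observer, sliding-mode, and fault-tolerant parts described above. The topology, gains, and reference are illustrative assumptions.

```python
# Leader-following consensus on an undirected graph: u_i = -k*[sum_j a_ij (w_i - w_j) + b_i (w_i - w_ref)]
import numpy as np

A = np.array([[0, 1, 1, 0],     # undirected adjacency between the four follower motors
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0, 0.0])            # only motor 0 measures the (virtual) leader directly
leader_speed = 100.0                           # rad/s reference
omega = np.array([80.0, 95.0, 110.0, 60.0])    # initial follower speeds

dt, k = 0.01, 2.0
for _ in range(2000):
    u = -k * ((A * (omega[:, None] - omega[None, :])).sum(axis=1) + b * (omega - leader_speed))
    omega = omega + dt * u

print("final speeds:", np.round(omega, 3))     # all converge towards the leader reference
```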
Procedia PDF Downloads 129
3729 Scattering Operator and Spectral Clustering for Ultrasound Images: Application on Deep Venous Thrombi
Authors: Thibaud Berthomier, Ali Mansour, Luc Bressollette, Frédéric Le Roy, Dominique Mottier, Léo Fréchier, Barthélémy Hermenault
Abstract:
Deep Venous Thrombosis (DVT) occurs when a thrombus is formed within a deep vein (most often in the legs). This disease can be deadly if a part of or the whole thrombus reaches the lung and causes a Pulmonary Embolism (PE). This disorder, often asymptomatic, has multifactorial causes: immobilization, surgery, pregnancy, age, cancers, and genetic variations. Our project aims to relate the thrombus epidemiology (origins, patient predispositions, PE) to its structure using ultrasound images. Ultrasonography and elastography images were collected using a Toshiba Aplio 500 at Brest Hospital. This manuscript compares two classification approaches: spectral clustering and the scattering operator. The former is based on graph and matrix theories, while the latter cascades wavelet convolutions with nonlinear modulus and averaging operators.
Keywords: deep venous thrombosis, ultrasonography, elastography, scattering operator, wavelet, spectral clustering
Procedia PDF Downloads 483
3728 Product Form Bionic Design Based on Eye Tracking Data: A Case Study of Desk Lamp
Authors: Huan Lin, Liwen Pang
Abstract:
In order to reduce the ambiguity and uncertainty of product form bionic design, a product form bionic design method based on eye tracking is proposed. An eye-tracking experiment is designed to calculate the average time ranking of the specific parts of the bionic shape that the subjects look at. The key bionic shape is identified through the experiment and then applied to a desk lamp bionic design. During the design case, the FAHP (Fuzzy Analytic Hierarchy Process) and SD (Semantic Differential) methods are first used to identify a consumer emotional perception model for the desk lamp before product design. By investigating different desk lamp design elements and consumer views, the form design factors of the desk lamp product are identified, and all design schemes are ranked after calculation. The desk lamp form bionic design method combines the key bionic shape extracted from the eye-tracking experiment with the priority ranking of the desk lamp design schemes. This study provides an objective and rational method for product form bionic design.
Keywords: bionic design, form, eye tracking, FAHP, desk lamp
Procedia PDF Downloads 234
3727 Multi-Agent System Based Distributed Voltage Control in Distribution Systems
Authors: A. Arshad, M. Lehtonen, M. Humayun
Abstract:
With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology in order to tackle the voltage control problem in a distributed manner with the least latency. This paper proposes a multi-agent-based distributed voltage level control. In this method a flat architecture of agents is used, and the agents involved in the whole control procedure are the On Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage value within statutory limits, as close as possible to the nominal. The total loss cost is the sum of the network losses cost, the DG curtailment costs, and the voltage damage cost (which is based on a penalty function implementation). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and losses cost against a varying voltage limit band. The method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, downstream from the controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At first, a token is generated by the OLTCA at each time step, and it transfers from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA will contact the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of the controller, or if the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is considered as a last resort. For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC) and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration, the loss reduction is not very significant. Another observation is that the new, stricter limits calculated by cost optimization move towards the statutory limits of ±10% of the nominal with increasing DG penetration: for 25, 45, and 65% penetration, the calculated limits are ±5%, ±6.25%, and ±8.75%, respectively. The observed results lead to the conclusion that the novel voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly but with higher losses. In contrast, case 2 reduces the network losses gradually over time through the proposed iterative method of loss cost optimization by the OLTCA.
Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids
Procedia PDF Downloads 315
3726 Analysis of Mutation Associated with Male Infertility in Patients and Healthy Males in the Russian Population
Authors: Svetlana Zhikrivetskaya, Nataliya Shirokova, Roman Bikanov, Elizaveta Musatova, Yana Kovaleva, Nataliya Vetrova, Ekaterina Pomerantseva
Abstract:
Nowadays there is a growing number of couples with conceiving problems due to male or female infertility. Genetic abnormalities are responsible for about 31% of all cases of male infertility. These abnormalities include both chromosomal aberrations or aneuploidies and mutations in certain genes. Chromosomal abnormalities can be easily identified; thus the development of screening panels able to reveal the genetic causes of male infertility at the gene level is of current interest. There are approximately 2,000 genes involved in male fertility, which is why it is very important to determine the most clinically relevant ones under given population and ethnic conditions. An infertility screening panel containing 48 mutations in the genes AMHR2, CFTR, DNAI1, HFE, KAL1, TSSK2, and the AZF locus, which are the most clinically relevant for the European population according to the NCBI and ClinVar databases, was designed. The aim of this research was to confirm the clinical relevance of these mutations in the Russian population. Genotyping was performed in 220 patients with different types of male infertility and in 57 healthy males with normozoospermia. Mutations were identified by end-point PCR with TaqMan probes in microfluidic plates. The frequency of 5 mutations in healthy males and 13 mutations in patients with infertility was revealed and estimated. The frequency of the c.187C>G mutation in the HFE gene was significantly lower in healthy males (8.8%) than in patients (17.7%) and than the values for the European population according to the ExAC database (13.7%) and dbSNP (17.2%). Analysis of the c.3454G>C and c.1545_1546delTA mutations in the CFTR gene revealed increased frequencies (0.9 and 0.2%, respectively) in patients with infertility compared with the data for the European population (0.04%, ExAC, European (Non-Finnish)) and for the aggregated populations (0.002%, ExAC), since there are no European population data for the c.1545_1546delTA mutation. The frequency of the del508 mutation (CFTR) in patients (1.59%) was lower than in European males with infertility (3.34-6.25%, depending on nationality) and at the same level as in healthy Europeans (1.06%, ExAC, European (Non-Finnish)). Analysis of the c.845G>A (HFE) mutation showed a decreased frequency in patients (1.8%) in contrast to the European population data (5.1%, ExAC, European (Non-Finnish)). Moreover, the obtained data revealed no statistically significant frequency difference for the c.845G>A mutation (HFE) between healthy males in the Russian and European populations. The allele frequencies of the mutations c.350G>A (CFTR), c.193A>T (HFE), c.774C>T, and c.80A>G (gene TSSK2) showed no significant difference among patients with infertility, healthy males, and Europeans. Analysis of the AZF locus revealed an increased frequency of the AZFc microdeletion in patients with male infertility. Thereby, new data on the allele frequencies in infertility patients in the Russian population were obtained, and the frequency differences of mutations associated with male infertility among patients and healthy males in the Russian population and the European one were estimated. The revealed differences show that, for a screening panel detecting genetically caused male infertility to be highly effective, it is very important to consider the ethnic and population characteristics of the patients to be screened.
Keywords: allele frequency, azoospermia, male infertility, mutation, population
Procedia PDF Downloads 394
3725 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns. For a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bipartite network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks what the predominant features of a video are that give it the highest probability of being commented on. Following on from this question, how can we use these features to predict the action of an agent commenting on one video instead of another, considering the characteristics of the commentators, videos, topics, channels, and recommendations? We expect to see that the videos of more popular channels generate higher viewer engagement and thus are more frequently commented on. The interest lies in discovering features which have not classically been considered as markers of popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of the 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video. The creation of a link is explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data are relevant for the period 2020-2021 and focus on the French YouTube environment. From this set of 391,588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1,000 subscribers and more than 4,000 hours of viewing time during the last twelve months). In the end, we have a data set of 128,462 videos from 4,093 channels. Based on these videos, we have a data set of 1,032,771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment each, and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
Procedia PDF Downloads 72
3724 Exact Energy Spectrum and Expectation Values of the Inverse Square Root Potential Model
Authors: Benedict Ita, Peter Okoi
Abstract:
In this work, the concept of the extended Nikiforov-Uvarov technique is discussed and employed to obtain the exact bound state energy eigenvalues and the corresponding normalized eigenfunctions of the inverse square root potential. With expressions for the exact energy eigenvalues and corresponding eigenfunctions, the expressions for the expectation values of the inverse separation squared, the kinetic energy, and the momentum squared of the potential are presented using the Hellmann-Feynman theorem. For visualization, algorithms written and implemented in the Python language are used to generate tables and plots of the energy eigenvalues and some expectation values for l-states. The results obtained here may find suitable applications in areas like atomic and molecular physics, chemical physics, nuclear physics, and solid-state physics.
Keywords: Schrodinger equation, Nikiforov-Uvarov method, inverse square root potential, diatomic molecules, Python programming, Hellmann-Feynman theorem, second order differential equation, matrix algebra
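A numerical cross-check sketch, not the extended Nikiforov-Uvarov derivation: the l = 0 radial Schrodinger equation for V(r) = -a/sqrt(r) is solved by finite differences (units with hbar = m = 1), and an expectation value is evaluated directly from the eigenvectors for comparison with the Hellmann-Feynman route. The potential strength, grid, and box size are assumed values.

```python
# Finite-difference bound states of V(r) = -a/sqrt(r) for l = 0, plus <1/r^2> per state.
import numpy as np

a = 1.0                                   # potential strength (assumed)
N, r_max = 1500, 120.0
r = np.linspace(r_max / N, r_max, N)      # radial grid, excluding r = 0
h = r[1] - r[0]

# H acting on u(r) = r*R(r): -(1/2) u'' + V(r) u, with u(0) = u(r_max) = 0
main = 1.0 / h**2 - a / np.sqrt(r)
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)
for n in range(3):
    u = states[:, n]
    u = u / np.sqrt(h * np.sum(u**2))     # normalise the bound state
    inv_r2 = h * np.sum(u**2 / r**2)      # <1/r^2>, comparable to the Hellmann-Feynman result
    print(f"n={n}:  E = {energies[n]: .5f}   <1/r^2> = {inv_r2: .5f}")
```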
Procedia PDF Downloads 28
3723 Perspectives and Outcomes of a Long and Shorter Community Mental Health Program
Authors: Danielle Klassen, Reiko Yeap, Margo Schmitt-Boshnick, Scott Oddie
Abstract:
The development of the 7-week Alberta Happiness Basics program was initiated in 2010 in response to the need for community mental health programming. This province-wide program aims to increase overall happiness and reduce negative thoughts and feelings through a positive psychology intervention. While the 7-week program has proven effective, a shortened 4-week program has additionally been developed to address client needs. In this study, participants were interviewed to determine if the 4- and 7-week programs had similar success in producing lasting behavior change at 3, 6, and 9 months post-program. A health quality of life (HQOL) measure was also used to compare the two programs and examine patient outcomes. Quantitative and qualitative analyses showed significant improvements in HQOL and sustainable behavior change for both programs. Findings indicate that the shorter, patient-centered program was effective in increasing happiness and reducing negative thoughts and feelings.
Keywords: primary care, mental health, depression, short duration
Procedia PDF Downloads 272
3722 A Multicriteria Model for Sustainable Management in Agriculture
Authors: Basil Manos, Thomas Bournaris, Christina Moulogianni
Abstract:
The European agricultural policy supports all member states in applying agricultural development plans for the development of their agricultural sectors. A specific measure of the agricultural development plans aims at helping young people enter the agricultural sector. This measure helps the participating young farmers achieve maximum efficiency, using environmentally friendly methods and practices, by altering their farm plans. This study applies a multicriteria mathematical programming (MCDA) model for the young farmers to find farm plans that achieve the maximum gross margin and the minimum environmental impacts (less use of fertilizers and irrigation water). The analysis was made in the region of Central Macedonia, Greece, among young farmers who participated in the “Setting up Young Farmers” measure during 2007-2010. The analysis includes the implementation of the MCDA model for farm plan optimization and the comparison of selected environmental indicators with those of the existing situation.
Keywords: multicriteria, optimum farm plans, environmental impacts, sustainable management
Procedia PDF Downloads 344
3721 Water Body Detection and Estimation from Landsat Satellite Images Using Deep Learning
Authors: M. Devaki, K. B. Jayanthi
Abstract:
The identification of water bodies from satellite images has recently received a great deal of attention. Different methods have been developed to distinguish water bodies from various satellite images that vary in terms of time and space. Urban water body identification arises in numerous applications where a great deal of certainty is required. There has been a sharp rise in the use of satellite images to map natural resources, including urban water bodies and forests, during the past several years. This is because water and forest resources depend on each other so heavily that ongoing monitoring of both is essential to their sustainable management. The relevant features from satellite images have been selected using a variety of techniques, including machine learning. Then, a convolutional neural network (CNN) architecture is created that classifies a superpixel from a complex metropolitan scene into one of two classes: containing water or not. The deep learning technique of CNNs has advanced tremendously in a variety of visual tasks. CNNs can improve classification performance by capturing the spectral-spatial regularities of the input data and extracting deep features hierarchically from raw images. The extent of the water body is then calculated using the satellite image's resolution. Experimental results demonstrate that the suggested method outperformed conventional approaches in terms of water extraction accuracy from remote-sensing images, with an average overall accuracy of 97%.
Keywords: water body, deep learning, satellite images, convolutional neural network
Procedia PDF Downloads 93
3720 Determination of Optimum Water Consumptive Using Deficit Irrigation Model for Barely: A Case Study in Arak, Iran
Authors: Mohsen Najarchi
Abstract:
This research was carried out in five fields (5-15 hectares) in Arak, located in the center of Iran, to determine the optimum level of water consumption for barley in four growth stages (vegetative, yield formation, flowering, and ripening). Actual evapotranspiration was calculated using the water requirement measured in the fields. Five levels of water requirement, equal to 50, 60, 70, 80, and 90 percent, formed the treatments. To determine the optimum level of water requirement, linear programming was used. The study showed that 60 percent of the water requirement (40 percent deficit irrigation) was the optimum level of irrigation for winter wheat in the four stages of growth. Comparison of all the treatments indicated above with the normal condition (100% water requirement) shows an increase in water use efficiency. Although the 40% deficit irrigation treatment led to a 38% decrease in yield, the net benefit increased by 11.37%. Furthermore, in comparison with the normal condition, 70% of the water requirement increased water use efficiency by 30%.
Keywords: optimum, deficit irrigation, water use efficiency, evapotranspiration
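An illustrative linear program in the spirit of the study: allocate a limited seasonal water volume across the four growth stages to maximise net benefit, assuming a linearised yield response per stage. All coefficients (prices, responses, requirements, and the available water fraction) are made up for the sketch.

```python
# Toy deficit-irrigation allocation across growth stages, solved with scipy linear programming.
import numpy as np
from scipy.optimize import linprog

stages = ["vegetative", "yield formation", "flowering", "ripening"]
yield_per_mm = np.array([3.0, 5.0, 6.0, 2.0])    # assumed kg/ha per mm applied in each stage
full_req = np.array([120, 150, 100, 80])         # full (100%) water requirement per stage, mm
water_available = 0.6 * full_req.sum()           # e.g. only 60% of the total requirement is available

crop_price = 0.30                                 # $/kg (assumed)
water_cost = 0.05                                 # $/mm/ha (assumed)

# maximise sum_i (price*yield_per_mm_i - water_cost) * w_i  ->  minimise the negative
c = -(crop_price * yield_per_mm - water_cost)
res = linprog(c,
              A_ub=[np.ones(4)], b_ub=[water_available],          # seasonal water budget
              bounds=[(0.5 * r, r) for r in full_req])            # 50%..100% of each stage requirement

for s, w, r in zip(stages, res.x, full_req):
    print(f"{s:16s}: {w:6.1f} mm applied ({100 * w / r:4.0f}% of requirement)")
print("net benefit: $%.0f per ha" % (-res.fun))
```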
Procedia PDF Downloads 400
3719 Using Geospatial Analysis to Reconstruct the Thunderstorm Climatology for the Washington DC Metropolitan Region
Authors: Mace Bentley, Zhuojun Duan, Tobias Gerken, Dudley Bonsal, Henry Way, Endre Szakal, Mia Pham, Hunter Donaldson, Chelsea Lang, Hayden Abbott, Leah Wilcynzski
Abstract:
Air pollution has the potential to modify the lifespan and intensity of thunderstorms and the properties of lightning. Using data mining and geovisualization, we investigate how background climate and weather conditions shape variability in urban air pollution and how this, in turn, shapes thunderstorms as measured by the intensity, distribution, and frequency of cloud-to-ground lightning. A spatiotemporal analysis was conducted in order to identify thunderstorms using high-resolution lightning detection network data. Over seven million lightning flashes were used to identify more than 196,000 thunderstorms that occurred between 2006 - 2020 in the Washington, DC Metropolitan Region. Each lightning flash in the dataset was grouped into thunderstorm events by means of a temporal and spatial clustering algorithm. Once the thunderstorm event database was constructed, hourly wind direction, wind speed, and atmospheric thermodynamic data were added to the initiation and dissipation times and locations for the 196,000 identified thunderstorms. Hourly aerosol and air quality data for the thunderstorm initiation times and locations were also incorporated into the dataset. Developing thunderstorm climatologies using a lightning tracking algorithm and lightning detection network data was found to be useful for visualizing the spatial and temporal distribution of urban augmented thunderstorms in the region.
Keywords: lightning, urbanization, thunderstorms, climatology
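A simple sketch of grouping cloud-to-ground flashes into thunderstorm events with density-based spatio-temporal clustering, using DBSCAN as a stand-in for the authors' clustering algorithm. The flash records, distance thresholds, and time window are invented for illustration.

```python
# Group lightning flashes into storms by clustering in rescaled (lat, lon, time) space.
import numpy as np
from sklearn.cluster import DBSCAN

# columns: latitude (deg), longitude (deg), time (minutes since start of day) -- synthetic flashes
flashes = np.array([
    [38.90, -77.03, 840], [38.91, -77.05, 843], [38.89, -77.02, 851],   # storm A
    [39.20, -77.60, 900], [39.21, -77.58, 905],                          # storm B
    [38.50, -76.80, 300],                                                 # isolated flash (noise)
])

# Rescale so ~0.15 deg (~15 km) and 30 minutes each map to one clustering unit (assumed thresholds).
scaled = np.column_stack([flashes[:, 0] / 0.15, flashes[:, 1] / 0.15, flashes[:, 2] / 30.0])

labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(scaled)
for storm_id in sorted(set(labels)):
    members = flashes[labels == storm_id]
    tag = "noise" if storm_id == -1 else f"storm {storm_id}"
    print(f"{tag}: {len(members)} flashes, {members[:, 2].min():.0f}-{members[:, 2].max():.0f} min")
```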
Procedia PDF Downloads 78
3718 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition
Abstract:
Knowledge graphs have become a new form of knowledge representation. However, there is no consensus regarding a plausible definition of entities and relationships in domain-specific knowledge graphs. Further, owing to several limitations and deficiencies, various domain-specific entity and relationship recognition approaches are far from perfect. Specifically, named entity recognition in the Chinese domain is a critical task for natural language processing applications. However, a bottleneck problem for Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a domain-specific distant supervised named entity recognition framework is proposed. The framework is divided into two stages: first, the distant supervised corpus is generated based on an entity linking model built on a graph attention neural network; secondly, the generated corpus is used to train the distant supervised named entity recognition model and obtain named entities. The linking model is verified on the CCKS2019 entity linking corpus, and its F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is better suited to distant supervised named entity recognition tasks. Finally, the framework is applied in the computer field, and the results show that it can obtain domain named entities.
Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network
Procedia PDF Downloads 98
3717 The Necessity to Standardize Procedures of Providing Engineering Geological Data for Designing Road and Railway Tunneling Projects
Authors: Atefeh Saljooghi Khoshkar, Jafar Hassanpour
Abstract:
One of the main problems at the design stage of many tunneling projects is the lack of an appropriate standard for the provision of engineering geological data in a predefined format. In particular, this is more evident in highway and railroad tunnel projects, in which a number of tunnels and different professional teams are involved. In this regard, comprehensive software needs to be designed using accepted methods in order to help engineering geologists prepare standard reports, which contain sufficient input data for the design stage. Regarding this necessity, applied software has been designed using macro capabilities and the Visual Basic for Applications (VBA) programming language in Microsoft Excel. In this software, all of the engineering geological input data required for designing different parts of tunnels, such as discontinuity properties, rock mass strength parameters, rock mass classification systems, boreability classification, the penetration rate, and so forth, can be calculated and reported in a standard format.
Keywords: engineering geology, rock mass classification, rock mechanics, tunnel
Procedia PDF Downloads 85
3716 Sexual Cognitive Behavioral Therapy: Psychological Performance and Openness to Experience
Authors: Alireza Monzavi Chaleshtari, Mahnaz Aliakbari Dehkordi, Amin Asadi Hieh, Majid Kazemnezhad
Abstract:
This research was conducted with the aim of determining the effectiveness of sexual cognitive behavioral therapy on psychological performance and openness to experience in women. The research was experimental, with a pre-test/post-test design. The statistical population consisted of all working, married women who were members of the researcher's Instagram social network and had problems in marital-sexual relationships (N=900). From this population, 30 people were selected by convenience sampling and randomly assigned to two groups (15 in the experimental group and 15 in the control group). The Depression, Anxiety and Stress Scale (DASS) and the Costa and McCrae personality questionnaire were used to collect data, and the cognitive behavioral therapy protocol of Dr. Mehrnaz Ali Akbari was used for the treatment sessions. To analyze the data, analysis of covariance was used in the SPSS 22 software environment. The results showed that sexual cognitive behavioral therapy has a positive and significant effect on psychological performance and openness to experience in women. Conclusion: It can be concluded that interventions such as sexual cognitive behavioral therapy can be used to treat marital problems.
Keywords: sexual cognitive behavioral therapy, psychological function, openness to experience, women
Procedia PDF Downloads 81
3715 Preprocessing and Fusion of Multiple Representation of Finger Vein patterns using Conventional and Machine Learning techniques
Authors: Tomas Trainys, Algimantas Venckauskas
Abstract:
The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: they acquire biometric data from an individual, extract feature sets, compare the feature set against the set stored in the vault, and give the result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and for preventing possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, including convolutional neural network and SVM methods for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested, and its performance and accuracy were compared with the results of other authors.
Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method
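A small sketch of feature-level fusion of two representations of the same finger-vein modality: the feature vectors are concatenated and fed to an SVM classifier for identification. The "features" here are random stand-ins for whatever the preprocessing and CNN stages would actually produce; subject counts and SVM parameters are assumed.

```python
# Feature-level fusion by concatenation, followed by SVM identification (illustrative data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_subjects, per_subject, dim = 10, 8, 32

# Two representations (e.g. two acquisitions / two preprocessing pipelines) per sample.
centres_a = rng.normal(size=(n_subjects, dim))
centres_b = rng.normal(size=(n_subjects, dim))
X_a = np.vstack([c + 0.3 * rng.normal(size=(per_subject, dim)) for c in centres_a])
X_b = np.vstack([c + 0.3 * rng.normal(size=(per_subject, dim)) for c in centres_b])
y = np.repeat(np.arange(n_subjects), per_subject)

X_fused = np.hstack([X_a, X_b])                 # feature-level fusion by concatenation

X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y, test_size=0.25, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("identification accuracy on held-out samples:", clf.score(X_te, y_te))
```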
Procedia PDF Downloads 155
3714 Emerging Cyber Threats and Cognitive Vulnerabilities: Cyberterrorism
Authors: Oludare Isaac Abiodun, Esther Omolara Abiodun
Abstract:
The purpose of this paper is to demonstrate that cyberterrorism exists and poses a threat to computer security and national security. Nowadays, people have become heavily dependent upon computers, phones, the Internet, and Internet of Things systems to share information, communicate, conduct searches, etc. However, these network systems are at risk from various sources, both known and unknown. These risks are created by malicious individuals, groups, organizations, or governments, who take advantage of vulnerabilities in computer systems to obtain sensitive information from people, organizations, or governments. In doing so, they engage in computer threats, crime, and terrorism, thereby making the use of computers insecure for others. The threat of cyberterrorism takes various forms and differs from one country to another. These threats include disrupting communications and information, stealing data, destroying data, leaking and breaching data, interfering with messages and networks, and, in some cases, demanding financial rewards for stolen data. Hence, this study identifies many ways in which cyberterrorists utilize the Internet as a tool to advance their malicious mission, which negatively affects computer security and safety. It also identifies causes of disparate anomalous behaviors and the theoretical, ideological, and current forms of the likelihood of cyberterrorism. Therefore, as a countermeasure, this paper proposes the use of previous and current computer security models found in the literature to help counter cyberterrorism.
Keywords: cyberterrorism, computer security, information, internet, terrorism, threat, digital forensic solution
Procedia PDF Downloads 100
3713 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a searching activity using Bayesian inferential reasoning and learning. A Bayesian belief network was used to monitor the agents' knowledge about their environment, and cases are recorded for network training using the expectation-maximisation or gradient descent algorithm. The well-trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case. UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focuses on two problems: (i) an effective agent path planning strategy and (ii) knowledge understanding and prediction (SA). The path planning problem, inspired by the animal mode of foraging and using a Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. The results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoints assessment. It improves the performance of the ordinary Lévy flight by saving the agents' resources and mission time through redundant search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results demonstrate its effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using the learning error rate, logarithmic loss, and Brier score, and the results show that even limited agent mission data can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work focuses on simplifying the architecture.
Keywords: Levy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence
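A minimal sketch of the Lévy-flight waypoint generator for one searching UAV: step lengths are drawn from a heavy-tailed power law via inverse-transform sampling, directions are uniform, and the last k waypoints are remembered to discourage redundant revisits, loosely mirroring the k-previous waypoints assessment. All parameters (exponent, separation, area bounds) are assumed.

```python
# Lévy-flight waypoint generation with a simple k-previous-waypoints redundancy check.
import numpy as np

rng = np.random.default_rng(42)

def levy_step(l_min=1.0, mu=2.0):
    """Power-law step length p(l) ~ l^(-mu), l >= l_min, via inverse transform sampling."""
    return l_min * rng.random() ** (-1.0 / (mu - 1.0))

def next_waypoint(pos, recent, k=5, min_sep=2.0, bounds=100.0):
    """Propose Lévy moves until one is far enough from the k previously visited waypoints."""
    while True:
        angle, step = rng.uniform(0, 2 * np.pi), levy_step()
        cand = np.clip(pos + step * np.array([np.cos(angle), np.sin(angle)]), 0, bounds)
        if all(np.linalg.norm(cand - w) > min_sep for w in recent[-k:]):
            return cand

pos, recent = np.array([50.0, 50.0]), []
for _ in range(20):
    pos = next_waypoint(pos, recent)
    recent.append(pos)
print(np.round(np.array(recent), 1))
```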
Procedia PDF Downloads 148
3712 RFID Laptop Monitoring and Management System
Authors: Francis E. Idachaba, Sarah Uyimeh Tommy
Abstract:
This paper describes the design of an RFID laptop monitoring and management system. Laptops embedded with RFID chips are monitored and tracked, providing a system for monitoring the movement of the laptops in and out of a building. The proposed system is implemented with both hardware and software components. The hardware architecture consists of RFID passive tags, an RFID module (reader), and a server hosting the application and database. The RFID readers are distributed at major exits of a building or premises. The tags are programmed with the owner's and laptop's details and are concealed in the laptops. The software architecture consists of application software that has the APIs (application programming interfaces) necessary to interface the RFID system with the PC to achieve an automated laptop monitoring system, a friendly graphical user interface (GUI), and a database that saves all readings and owner details. The system is capable of reducing laptop theft, especially in students' hostels, as laptops can be monitored as they are taken either in or out of the building.
Keywords: asset tracking, GUI, laptop monitoring, radio frequency identification, passive tags
Procedia PDF Downloads 394
3711 A Case Study of Bee Algorithm for Ready Mixed Concrete Problem
Authors: Wuthichai Wongthatsanekorn, Nuntana Matheekrieangkrai
Abstract:
This research proposes the Bee Algorithm (BA) to optimize the Ready Mixed Concrete (RMC) truck scheduling problem from a single batch plant to multiple construction sites. This problem is considered an NP-hard constrained combinatorial optimization problem. This paper provides the details of the RMC dispatching process and its related constraints. BA was then developed to minimize the total waiting time of RMC trucks while satisfying all constraints. The performance of BA is evaluated on two benchmark problems (3 and 5 construction sites) from previous researchers. The simulation results of BA are compared, in terms of efficiency and accuracy, with a Genetic Algorithm (GA), and for all problems the BA approach outperforms GA in obtaining the optimal solution. Hence, the BA approach could be practically implemented to obtain the best schedule.
Keywords: bee colony optimization, ready mixed concrete problem, truck scheduling, multiple construction sites
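A compact sketch of a Bees-Algorithm-style search over dispatch orders of RMC deliveries. The waiting-time model is deliberately simplified (a single loading bay, fixed travel times, one unloading slot per site), and all problem data and algorithm parameters are invented for illustration rather than taken from the benchmark problems above.

```python
# Bees-Algorithm-style search over delivery dispatch orders, minimising total truck waiting time.
import random
random.seed(3)

deliveries = [("site_A", 20), ("site_A", 20), ("site_B", 35), ("site_B", 35), ("site_C", 25)]
LOAD, UNLOAD = 10, 15            # minutes to load a truck at the plant / unload at a site

def total_waiting(order):
    plant_free, site_free, waiting = 0, {}, 0
    for i in order:
        site, travel = deliveries[i]
        depart = plant_free + LOAD                      # trucks are loaded one at a time
        plant_free = depart
        arrive = depart + travel
        start = max(arrive, site_free.get(site, 0))     # wait if the previous truck is still unloading
        waiting += start - arrive
        site_free[site] = start + UNLOAD
    return waiting

def neighbour(order):
    a, b = random.sample(range(len(order)), 2)
    order = list(order)
    order[a], order[b] = order[b], order[a]
    return order

# ns scouts, nb best sites, nre recruited bees per best site, repeated for a few cycles.
ns, nb, nre, iters = 10, 3, 4, 50
scouts = [random.sample(range(len(deliveries)), len(deliveries)) for _ in range(ns)]
for _ in range(iters):
    scouts.sort(key=total_waiting)
    new_population = []
    for site in scouts[:nb]:                             # local (neighbourhood) search around best sites
        candidates = [site] + [neighbour(site) for _ in range(nre)]
        new_population.append(min(candidates, key=total_waiting))
    while len(new_population) < ns:                      # remaining bees scout randomly
        new_population.append(random.sample(range(len(deliveries)), len(deliveries)))
    scouts = new_population

best = min(scouts, key=total_waiting)
print("best dispatch order:", best, " total waiting =", total_waiting(best), "min")
```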
Procedia PDF Downloads 388
3710 Optimal Design and Simulation of a Grid-Connected Photovoltaic (PV) Power System for an Electrical Department in University of Tripoli, Libya
Authors: Mustafa Al-Refai
Abstract:
This paper presents the optimal design and simulation of a grid-connected photovoltaic (PV) system to supply electric power to meet the energy demand of the Electrical Department at the University of Tripoli, Libya. Solar radiation is the key factor determining the electricity produced by photovoltaic (PV) systems. This paper develops a novel method to calculate the solar photovoltaic generation capacity on the basis of the mean global solar radiation data available for Tripoli, Libya, and finally develops a system design of possible plant capacity for the available roof area. MATLAB/Simulink programming tools and monthly average solar radiation data are used for the design and simulation. The specifications of the equipment are provided based on the availability of the components in the market. Simulation results and analyses are presented to validate the proposed system configuration.
Keywords: photovoltaic (PV), grid, Simulink, solar energy, power plant, solar irradiation
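A back-of-the-envelope sketch of the sizing arithmetic: monthly energy yield from mean daily global solar radiation, array area, module efficiency, and a performance ratio. The radiation values and system parameters below are placeholders, not measured Tripoli data or the paper's design figures.

```python
# Monthly and annual PV yield estimate: E = A * eta * PR * H_day * days (all inputs assumed).
PANEL_EFFICIENCY = 0.18      # assumed module efficiency
PERFORMANCE_RATIO = 0.75     # assumed losses (inverter, temperature, soiling, wiring)
ARRAY_AREA_M2 = 200.0        # assumed available roof area used for PV

# Mean daily global solar radiation per month, kWh/m^2/day (illustrative values only)
mean_daily_radiation = [3.2, 4.1, 5.3, 6.4, 7.2, 7.8, 7.9, 7.4, 6.2, 4.8, 3.6, 3.0]
days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

annual_kwh = 0.0
for month, (h_day, days) in enumerate(zip(mean_daily_radiation, days_in_month), start=1):
    e_month = ARRAY_AREA_M2 * PANEL_EFFICIENCY * PERFORMANCE_RATIO * h_day * days
    annual_kwh += e_month
    print(f"month {month:2d}: {e_month:7.0f} kWh")
print(f"estimated annual PV generation: {annual_kwh:,.0f} kWh")
```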
Procedia PDF Downloads 303
3709 Efficient Reconstruction of DNA Distance Matrices Using an Inverse Problem Approach
Authors: Boris Melnikov, Ye Zhang, Dmitrii Chaikovskii
Abstract:
We continue to consider one of the cybernetic methods in computational biology related to the study of DNA chains, namely the problem of reconstructing a partially filled distance matrix of DNA chains. When applied in a programming context, it becomes apparent that, with a modern computer of average capability, creating even a small distance matrix for mitochondrial DNA sequences is quite time-consuming with standard algorithms. As the size of the matrix grows, the computational effort required increases significantly, potentially spanning several weeks to months of non-stop processing. Hence, calculating the distance matrix on conventional computers is hardly feasible, and supercomputers are usually not available. Therefore, we started by publishing our variants of the algorithms for calculating the distance between two DNA chains; then, we published algorithms for restoring partially filled matrices, i.e., the inverse problem of matrix processing. In this paper, we propose an algorithm for restoring the distance matrix for DNA chains; the primary focus is on enhancing the algorithms that shape the greedy function within the branch-and-bound method framework.
Keywords: DNA chains, distance matrix, optimization problem, restoring algorithm, greedy algorithm, heuristics
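An illustrative sketch of one simple way to restore missing entries of a distance matrix: treat the known entries as graph edges and bound each missing distance by the shortest known path (a triangle-inequality upper bound). This is a stand-in for the greedy/branch-and-bound scheme discussed above, applied here to a small synthetic matrix rather than real DNA distances.

```python
# Fill unknown entries of a partial distance matrix with shortest-path (Floyd-Warshall) bounds.
import numpy as np

INF = np.inf
D = np.array([                      # partially filled, symmetric distance matrix (INF = unknown)
    [0.0, 2.0, INF, 7.0],
    [2.0, 0.0, 3.0, INF],
    [INF, 3.0, 0.0, 4.0],
    [7.0, INF, 4.0, 0.0],
])

def restore(D):
    """Replace unknown entries by the length of the shortest path through known distances."""
    est = D.copy()
    n = len(est)
    for k in range(n):
        est = np.minimum(est, est[:, [k]] + est[[k], :])
    return est

restored = restore(D)
missing = np.isinf(D) & ~np.eye(len(D), dtype=bool)
print("restored matrix:\n", restored)
print("estimated entries:", {(i, j): restored[i, j] for i, j in zip(*np.where(missing)) if i < j})
```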
Procedia PDF Downloads 123