Search results for: deep feed forward neural network
5997 A Survey on Traditional MAC Layer Protocols in Cognitive Wireless Mesh Networks
Authors: Anusha M., V. Srikanth
Abstract:
Maximizing spectrum usage and the numerous applications of wireless communication networks have driven intense demand for the available spectrum. Cognitive radios control their receiver and transmitter parameters precisely so that they can utilize vacant licensed spectrum without impacting the operation of the primary licensed users. The use of multiple channels helps address interference and thereby improves overall network efficiency. The MAC protocol in a cognitive radio network governs spectrum usage by coordinating access to multiple channels among the users. In this paper, we study the architecture of the cognitive wireless mesh network and a traditional TDMA-based MAC method for allocating channels dynamically. The majority of the MAC protocols proposed in the literature rely on a Common Control Channel (CCC) to coordinate services between cognitive radio secondary users. An extensive study of multi-channel, multi-radio frequency-range channel allotment and continually synchronized TDMA scheduling is presented in summarized form.
Keywords: TDMA, MAC, multi-channel, multi-radio, WMNs, cognitive radios
Procedia PDF Downloads 561
5996 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation
Authors: Min L. Stewart, Patrick Johnston
Abstract:
Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category) in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes of expected visual sequences in high- relative to low-AT, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT. Preliminary frequency analysis data comparing high and low-AT show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.
Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding
Procedia PDF Downloads 111
5995 A Highly Efficient Broadcast Algorithm for Computer Networks
Authors: Ganesh Nandakumaran, Mehmet Karaata
Abstract:
A wave is a distributed execution, often made up of a broadcast phase followed by a feedback phase, requiring the participation of all the system processes before a particular event called the decision is taken. Wave algorithms with one initiator, such as the 1-wave algorithm, have been shown to be very efficient for broadcasting messages in tree networks. Extensions of this algorithm that broadcast a sequence of waves using a single initiator have been implemented in algorithms such as the m-wave algorithm. However, as the network size increases, having a single initiator adversely affects the message delivery times to nodes further away from the initiator. As a remedy, broadcast waves can be initiated by multiple initiator nodes distributed across the network to reduce the completion time of broadcasts. The waves initiated by one or more initiator processes form a collection of waves covering the entire network. Solutions to global snapshots, distributed broadcast, and various synchronization problems can be obtained efficiently using waves with multiple concurrent initiators. In this paper, we propose the first stabilizing multi-wave sequence algorithm implementing waves started by multiple initiator processes such that every process in the network receives at least one sequence of broadcasts. Being stabilizing, the proposed algorithm can withstand transient faults and does not require initialization. We view a fault as transient if it perturbs the configuration of the system but not its program.
Keywords: distributed computing, multi-node broadcast, propagation of information with feedback and cleaning (PFC), stabilization, wave algorithms
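The broadcast-and-feedback structure of a single-initiator wave described above can be sketched as follows. This is a minimal sequential simulation for illustration only, not the paper's stabilizing multi-wave algorithm; the tree, node names, and recursive execution are assumptions made for readability:

```python
# Illustrative sketch: a single-initiator broadcast-with-feedback wave on a
# tree, simulated sequentially. A real distributed execution would run the
# phases concurrently via message passing between neighbours.
def wave(tree, root, message):
    """Broadcast `message` down the tree, then collect feedback up.
    `tree` maps each node to its list of children."""
    received, acked = [], []

    def visit(node):
        received.append(node)              # broadcast phase: node gets the message
        for child in tree.get(node, []):
            visit(child)
        acked.append(node)                 # feedback phase: node reports completion

    visit(root)
    # The initiator "decides" only after every process has participated.
    decision = len(acked) == len(received)
    return received, acked, decision

tree = {'r': ['a', 'b'], 'a': ['c', 'd'], 'b': []}
received, acked, decided = wave(tree, 'r', 'hello')
```

The sketch mirrors the key ordering guarantee of a wave: the initiator's own acknowledgement (and hence the decision) comes only after all other processes have both received the broadcast and acknowledged it.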
Procedia PDF Downloads 504
5994 The Omicron Variant BA.2.86.1 of SARS-CoV-2 Demonstrates an Altered Interaction Network and Dynamic Features to Enhance the Interaction with the hACE2
Authors: Taimur Khan, Zakirullah, Muhammad Shahab
Abstract:
The SARS-CoV-2 variant BA.2.86 (Omicron) has emerged with unique mutations that may increase its transmission and infectivity. This study investigates how these mutations alter the interaction network and dynamic properties of the Omicron receptor-binding domain (RBD) compared to the wild-type virus, focusing on its binding affinity to the human ACE2 (hACE2) receptor. Protein-protein docking and all-atom molecular dynamics simulations were used to analyze structural and dynamic differences. Despite the structural similarity to the wild-type virus, the Omicron variant exhibits a distinct interaction network involving new residues that enhance its binding capacity. The dynamic analysis reveals increased flexibility in the RBD, particularly in loop regions crucial for hACE2 interaction. Mutations significantly alter the secondary structure, leading to greater flexibility and conformational adaptability compared to the wild type. Binding free energy calculations confirm that the Omicron RBD has a higher binding affinity (-70.47 kcal/mol) to hACE2 than the wild-type RBD (-61.38 kcal/mol). These results suggest that the altered interaction network and enhanced dynamics of the Omicron variant contribute to its increased infectivity, providing insights for the development of targeted therapeutics and vaccines.
Keywords: SARS-CoV-2, molecular dynamic simulation, receptor binding domain, vaccine
Procedia PDF Downloads 22
5993 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization
Authors: Aitor Bilbao, Dragos Axinte, John Billingham
Abstract:
The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and the material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, and the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually described as discrete systems, and the total removed material is calculated by the summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough that the behaviour can be similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes.
The inverse problem is usually solved for this kind of process by simply controlling dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite-difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation
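To make the idea concrete, the following sketch poses a toy 1D version of the dwell-time inverse problem: the etched depth is modelled as the beam footprint convolved with the dwell-time profile, and the dwell times are recovered by gradient descent, where the gradient is formed with the adjoint (transposed) operator, the same ingredient a discrete adjoint method exploits. The footprint, target profile, and step sizes are invented for illustration and are not the authors' calibrated process model:

```python
import numpy as np

# Toy linear forward model: depth z = footprint * t (1D convolution).
def solve_dwell_times(z_target, footprint, steps=500, lr=0.1):
    t = np.zeros_like(z_target)                    # dwell time per pixel
    for _ in range(steps):
        z = np.convolve(t, footprint, mode='same') # forward model
        residual = z - z_target
        # Adjoint of convolution is correlation (flip the kernel);
        # for a symmetric footprint the two coincide.
        grad = np.convolve(residual, footprint[::-1], mode='same')
        t = np.maximum(t - lr * grad, 0.0)         # dwell times cannot be negative
    return t

# Assumed Gaussian-like footprint and a shallow step target profile.
x = np.linspace(-3, 3, 7)
footprint = np.exp(-x**2)
footprint /= footprint.sum()
z_target = np.zeros(50)
z_target[20:30] = 1.0
t = solve_dwell_times(z_target, footprint)
z_fit = np.convolve(t, footprint, mode='same')
```

A naive linear solution would simply set each dwell time proportional to the local target depth; the adjoint-driven iteration instead accounts for the footprint's overlap between neighbouring pixels, which is what a discrete adjoint method scales up to nonlinear, surface-dependent removal models.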
Procedia PDF Downloads 275
5992 Introduce a New Model of Anomaly Detection in Computer Networks Using Artificial Immune Systems
Authors: Mehrshad Khosraviani, Faramarz Abbaspour Leyl Abadi
Abstract:
Computer networks are a fundamental component of the modern information society, and most of them are connected to the Internet. Because the Internet was not originally designed with security as a primary goal, these networks have been the target of many significant attacks in recent decades. Today, various security tools and systems, including intrusion detection systems, are used to provide network security. This paper evaluates an anomaly detection system based on artificial immune systems. The idea of applying artificial immune methods to the detection of abnormalities in computer networks is motivated by properties they share with the requirements of intrusion detection: the ability to detect previously unseen abnormalities and a variety of attacks, memory, learning ability, and self-regulation. The proposed detection system requires only normal samples of network traffic for training; no additional data about attack types is needed. The system applies positive selection and negative selection processes to the training samples to build a detector colony that distinguishes normal behavior from attacks. Evaluation on a real data collection indicates that the false alarm rate of the proposed system is low compared to other methods, while its detection rate remains competitive.
Keywords: artificial immune system, abnormality detection, intrusion detection, computer networks
Procedia PDF Downloads 353
5991 A 5G Architecture Based on Dynamic Vehicular Clustering Enhancing VoD Services Over Vehicular Ad Hoc Networks
Authors: Lamaa Sellami, Bechir Alaya
Abstract:
Nowadays, video-on-demand (VoD) applications are becoming one of the main trends driving vehicular network usage. In this paper, considering the unpredictable vehicle density, the unexpected acceleration or deceleration of the different cars included in the vehicular traffic load, and the limited radio range of the employed communication scheme, we introduce the Dynamic Vehicular Clustering (DVC) algorithm as a new scheme for video streaming systems over VANETs. The proposed algorithm takes advantage of the concept of small cells and the introduction of wireless backhauls, inspired by the features and performance of the Long Term Evolution (LTE)-Advanced network. The proposed clustering algorithm considers multiple characteristics, such as the vehicle's position and acceleration, to reduce latency and packet loss. Each cluster is therefore treated as a small cell containing vehicular nodes and an access point that is elected according to particular specifications.
Keywords: video-on-demand, vehicular ad-hoc network, mobility, vehicular traffic load, small cell, wireless backhaul, LTE-advanced, latency, packet loss
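The kind of position- and kinematics-aware grouping described above can be sketched very roughly as follows. The thresholds, fields, and head-election rule are illustrative assumptions, not the DVC algorithm itself:

```python
# Illustrative grouping only: a vehicle joins a cluster when it is within
# the head's radio range and travels at a similar speed; otherwise it
# starts a new cluster and acts as its head (the small cell's access point).
def cluster_vehicles(vehicles, radio_range=300.0, max_speed_gap=5.0):
    """vehicles: list of (vehicle_id, position_m, speed_mps)."""
    clusters = []
    for vid, pos, speed in sorted(vehicles, key=lambda v: v[1]):
        for cluster in clusters:
            head_id, head_pos, head_speed = cluster[0]
            if abs(pos - head_pos) <= radio_range and abs(speed - head_speed) <= max_speed_gap:
                cluster.append((vid, pos, speed))
                break
        else:
            clusters.append([(vid, pos, speed)])   # vehicle becomes a new head

    return clusters

fleet = [('a', 0, 30), ('b', 120, 28), ('c', 900, 27), ('d', 950, 26)]
clusters = cluster_vehicles(fleet)
```

Grouping by both position and kinematics keeps cluster membership stable for longer than position alone, which is the property that matters for sustaining a VoD stream through the cluster head.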
Procedia PDF Downloads 140
5990 Shoring System Selection for Deep Excavation
Authors: Faouzi Ahtchi-Ali, Marcus Vitiello
Abstract:
A study was conducted in the east region of the Middle East to assess the constructability of a shoring system for a 12-meter-deep excavation. Several shoring systems were considered, including secant concrete piling, contiguous concrete piling, and sheet piling. The excavation was carried out in very dense sand with the groundwater level located 3 meters below the ground surface. The study included a pilot test for each shoring system listed above. The secant concrete piling consisted of overlapping concrete piles installed to a depth of 16 meters. A drilling method with full steel casing was utilized to install the concrete piles; the verticality of the piles was a concern for the overlap. The contiguous concrete piling required the installation of micro-piles to seal the gaps between the concrete piles. This method revealed that the gaps between the piles were not fully sealed, as evidenced by groundwater penetrating into the excavation. The sheet-piling method required pre-drilling due to the high blow count of the penetrated layer of saturated sand. The study concluded that the sheet-piling method with pre-drilling was the most cost-effective and recommended method for the shoring system.
Keywords: excavation, shoring system, Middle East, drilling method
Procedia PDF Downloads 468
5989 Citation Analysis of New Zealand Court Decisions
Authors: Tobias Milz, L. Macpherson, Varvara Vetrova
Abstract:
The law is a fundamental pillar of human societies as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investments into supporting technologies for the legal industry (also known as “legal tech” or “law tech”) over the last decade. A sub-discipline of particular appeal is concerned with assisted legal research. Supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, citation indexes provided an effective means to discover new precedent cases. Nowadays, computer-assisted network analysis tools allow for new and more efficient ways to reveal the “hidden” information that is conveyed through citation behavior. Unfortunately, access to openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect and convert court decisions from openly available sources such as NZLII into uniform and machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text.
The data was then imported into a graph database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between courts of connected cases were added to indicate an indirect citation between the courts. Neo4j, as a graph database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential/most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behavior of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria and Germany. The authors provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can specify the network algorithms and metrics to include only specific courts, filtering the results to the area of law of interest.
Keywords: case citation network, citation analysis, network analysis, Neo4j
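The regular-expression step that feeds the graph can be sketched as follows. The pattern below handles only one assumed neutral-citation format ([year] COURT number) and a few court abbreviations; real New Zealand decisions cite in many more styles, which a production pipeline would need a larger battery of expressions to cover:

```python
import re

# Assumed simplified neutral-citation pattern: "[2015] NZHC 142" etc.
# The court abbreviations listed here are examples, not the full set.
CITATION = re.compile(r'\[(\d{4})\]\s+(NZSC|NZCA|NZHC|NZDC)\s+(\d+)')

def extract_citations(decision_text):
    """Return the neutral citations found in a decision's text."""
    return [f'[{year}] {court} {number}'
            for year, court, number in CITATION.findall(decision_text)]

text = ("The approach in Smith v Jones [2015] NZHC 142 was endorsed "
        "on appeal in [2016] NZCA 77.")
cites = extract_citations(text)
```

Each extracted citation becomes an edge from the citing decision's node to the cited decision's node when loaded into the graph database.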
Procedia PDF Downloads 107
5988 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines the method of how to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural network. These methods were demonstrated with a case study where the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaic (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions to the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, i.e. the negative GWP values of the Solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of the ANFIS models highly depends on the range of cases at which it was calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point wherein increasing the input values does not improve the GWP and LCOE anymore. 
In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a Life Cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
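The mechanics of a Sugeno-type inference of the kind compared above can be illustrated with a minimal zero-order example. The membership functions, rule consequents, and single input are invented for the sketch; the paper's calibrated models use three inputs (Solar Irradiation, Module Efficiency, Performance Ratio) and many more rules:

```python
# Minimal zero-order Sugeno FIS sketch with one input and two rules.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_gwp(irradiation):
    # Rule 1: IF irradiation is LOW  THEN GWP = 60  (consequents are assumed values)
    # Rule 2: IF irradiation is HIGH THEN GWP = 30
    w_low = tri(irradiation, 800, 1200, 1600)
    w_high = tri(irradiation, 1400, 1800, 2200)
    if w_low + w_high == 0:
        return None  # input outside the modelled range
    # Sugeno output: firing-strength-weighted average of the rule consequents.
    return (w_low * 60 + w_high * 30) / (w_low + w_high)

gwp = sugeno_gwp(1500)
```

A Mamdani-type system would instead aggregate fuzzy output sets and defuzzify (e.g. by centroid), while ANFIS would tune the membership parameters and consequents against calibration data rather than fixing them by hand.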
Procedia PDF Downloads 164
5987 Beneficiation of Pulp and Paper Mill Sludge for the Generation of Single Cell Protein for Fish Farming
Authors: Lucretia Ramnath
Abstract:
Fishmeal is extensively used for fish farming but is an expensive feed ingredient. A cheaper alternative to fishmeal is single cell protein (SCP), which can be cultivated on fermentable sugars recovered from organic waste streams such as pulp and paper mill sludge (PPMS). PPMS has a high cellulose content and is thus suitable for glucose recovery through enzymatic hydrolysis, although this is hampered by lignin and ash. To render PPMS amenable to enzymatic hydrolysis, the PPMS was pre-treated to produce a glucose-rich hydrolysate, which served as a feedstock for the production of fungal SCP. The PPMS used in this study had the following composition: 72.77% carbohydrates, 8.6% lignin, and 18.63% ash. The pre-treatments had no significant effect on lignin composition but had a substantial effect on carbohydrate and ash content. Enzymatic hydrolysis of screened PPMS was previously optimized through response surface methodology (RSM) and a 2-factorial design. The optimized protocol resulted in a hydrolysate containing 46.1 g/L of glucose, of which 86% was recovered after downstream processing by passing through a 100-mesh sieve (38 µm pore size). Vogel's medium supplemented with 10 g/L hydrolysate successfully supported the growth of Fusarium venenatum under standard growth conditions: pH 6, 200 rpm, 2.88 g/L ammonium phosphate, 25°C. A maximum F. venenatum biomass of 45 g/L was produced, with a yield coefficient of 4.67. The pulp and paper mill sludge hydrolysate contained approximately five times more glucose than was needed for SCP production and served as a suitable carbon source. We have shown that PPMS can be successfully beneficiated for SCP production.
Keywords: pulp and paper waste, fungi, single cell protein, hydrolysate
Procedia PDF Downloads 207
5986 Supervised/Unsupervised Mahalanobis Algorithm for Improving Performance for Cyberattack Detection over Communications Networks
Authors: Radhika Ranjan Roy
Abstract:
Deployment of machine learning (ML)/deep learning (DL) algorithms for cyberattack detection in operational communications networks (wireless and/or wire-line) is being delayed because of low performance metrics (e.g., recall, precision, and f₁-score). If datasets become imbalanced, which is the usual case for communications networks, the performance tends to become worse. The complexity of reducing the dimensionality of the feature sets while increasing performance is also a huge problem. Mahalanobis algorithms have been widely applied in scientific research because Mahalanobis distance metric learning is a successful framework. In this paper, we have investigated the Mahalanobis binary classifier algorithm for increasing cyberattack detection performance over communications networks as a proof of concept. We have also found that high-dimensional information in intermediate features, which is not fully utilized for classification tasks in ML/DL algorithms, is the main contributor to the improved performance of the Mahalanobis method, even for imbalanced and sparse datasets. With no feature reduction, MD offers uniform results for precision, recall, and f₁-score for unbalanced and sparse NSL-KDD datasets.
Keywords: Mahalanobis distance, machine learning, deep learning, NSL-KDD, local intrinsic dimensionality, chi-square, positive semi-definite, area under the curve
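A Mahalanobis binary classifier in the spirit described above can be sketched as follows. The synthetic data, threshold, and regularization constant are illustrative assumptions, not the NSL-KDD experimental setup:

```python
import numpy as np

# Proof-of-concept detector: fit the mean and covariance of normal traffic
# only, then flag any sample whose Mahalanobis distance from that
# distribution exceeds a threshold. Because only the normal class is
# modelled, imbalance in the attack class does not affect the fit.
class MahalanobisDetector:
    def fit(self, X_normal):
        self.mu = X_normal.mean(axis=0)
        cov = np.cov(X_normal, rowvar=False)
        # Regularize so the covariance stays positive definite (invertible).
        self.cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[1]))
        return self

    def distance(self, x):
        d = x - self.mu
        return float(np.sqrt(d @ self.cov_inv @ d))

    def predict(self, x, threshold=3.0):
        return 'attack' if self.distance(x) > threshold else 'normal'

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 4))   # stand-in for normal traffic features
det = MahalanobisDetector().fit(X)
```

Unlike Euclidean distance, the inverse-covariance weighting accounts for correlations between features, which is why the method tolerates high-dimensional feature sets without an explicit reduction step.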
Procedia PDF Downloads 78
5985 Blocking of Random Chat Apps at Home Routers for Juvenile Protection in South Korea
Authors: Min Jin Kwon, Seung Won Kim, Eui Yeon Kim, Haeyoung Lee
Abstract:
Numerous anonymous chat apps that connect people with random strangers have been released in South Korea. However, they have become a serious problem for young people, who often use them as channels for prostitution or sexual violence. Although ISPs in South Korea are responsible for making inappropriate content inaccessible on their networks, they do not block the traffic of random chat apps because (1) the use of random chat apps is entirely legal, and (2) they are reported to use HTTP proxy blocking, so non-HTTP traffic cannot be blocked. In this paper, we propose a service model that can block random chat apps at home routers. A service provider manages a blacklist that contains blocked apps' information. Home routers that subscribe to the service filter out the traffic of the apps using deep packet inspection. We have implemented a prototype of the proposed model, including a centralized server providing the blacklist, a Raspberry Pi-based home router that can filter out the traffic of the apps, and an Android app used by the router's administrator to locally customize the blacklist.
Keywords: deep packet inspection, internet filtering, juvenile protection, technical blocking
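The router-side check can be illustrated with a deliberately simplified payload match. The hostnames below are invented placeholders, and a real DPI engine would parse protocol structure (e.g. the TLS SNI field) rather than scan raw bytes:

```python
# Simplified illustration of blacklist-based filtering at the router:
# a packet is dropped when its payload carries a hostname from the
# blacklist, e.g. as sent in a TLS SNI or HTTP Host field.
BLACKLIST = {b'randomchat.example.com', b'stranger-chat.example.net'}  # assumed names

def should_block(payload: bytes) -> bool:
    """Return True when the payload mentions any blacklisted hostname."""
    return any(host in payload for host in BLACKLIST)

# A ClientHello-like payload carrying a blacklisted server name.
packet = b'\x16\x03\x01...randomchat.example.com...'
blocked = should_block(packet)
```

Because the match is on packet contents rather than on HTTP requests passing through a proxy, the same check applies to non-HTTP traffic, which is the gap in the ISPs' proxy-based blocking that the abstract points out.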
Procedia PDF Downloads 349
5984 Supporting Densification through the Planning and Implementation of Road Infrastructure in the South African Context
Authors: K. Govender, M. Sinclair
Abstract:
This paper demonstrates a proof of concept whereby shorter trips and land-use densification can be promoted through an alternative approach to the planning and implementation of road infrastructure in the South African context. It briefly discusses how the development of the Compact City concept relies on a combination of promoting shorter trips and densification through a change in focus in road infrastructure provision. The methodology developed in this paper uses a traffic model to test the impact of synthesized deterrence functions on congestion locations in the road network through the assignment of traffic on the study network. The results demonstrate that intelligent planning of road infrastructure can indeed promote reduced urban sprawl, increased residential density, and mixed-use areas supported by an efficient public transport system, as well as reduced dependence on the freeway network, all within a fixed road infrastructure budget. The study has resonance for all cities where urban sprawl is seemingly unstoppable.
Keywords: compact cities, densification, road infrastructure planning, transportation modelling
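The role a deterrence function plays in such a traffic model can be sketched with a textbook gravity-style trip distribution. The exponential form, the β value, and the two-zone example are generic illustrations, not the functions synthesized in the study:

```python
import math

# Gravity-model sketch: steepening the deterrence parameter beta penalizes
# long trips, shifting demand toward shorter, denser travel patterns.
def deterrence(cost_min, beta):
    """Exponential deterrence: willingness to travel decays with trip cost."""
    return math.exp(-beta * cost_min)

def distribute(productions, attractions, costs, beta):
    """Trip distribution: T[i][j] proportional to P_i * A_j * f(c_ij),
    normalized so each origin's trips sum to its production total."""
    trips = []
    for i, p in enumerate(productions):
        weights = [attractions[j] * deterrence(costs[i][j], beta)
                   for j in range(len(attractions))]
        total = sum(weights)
        trips.append([p * w / total for w in weights])
    return trips

# Two zones, each producing 100 trips; nearby destinations cost 5 min,
# distant ones 30 min.
costs = [[5, 30], [30, 5]]
short_biased = distribute([100, 100], [1, 1], costs, beta=0.15)
```

In the assignment stage, the resulting trip matrix is loaded onto the network, so a deterrence function that favours short trips also changes where congestion appears, which is the effect the methodology tests.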
Procedia PDF Downloads 178
5983 Drilling Quantification and Bioactivity of Machinable Hydroxyapatite-Yttrium Phosphate Bioceramic Composite
Authors: Rupita Ghosh, Ritwik Sarkar, Sumit K. Pal, Soumitra Paul
Abstract:
The use of hydroxyapatite bioceramics as restorative implants is widely known. These materials can be manufactured into a particular shape by a pressing and sintering route. However, machining processes are still a basic requirement to give those implants a near-net shape and to ensure dimensional and geometrical accuracy. In this context, optimising the machining parameters is an important factor in understanding the machinability of the materials and in reducing the production cost. In the present study, a method has been optimized to produce a true particulate drilled composite of hydroxyapatite and yttrium phosphate. The phosphates are used in varying ratios for a comparative study of the effects on flexural strength, hardness, machining (drilling) parameters and bioactivity. The maximum flexural strength and hardness of the composite that could be attained are 46.07 MPa and 1.02 GPa, respectively. Drilling is done with a conventional radial drilling machine aided with a dynamometer, using high-speed steel (HSS) and solid carbide (SC) drills. The effects of variation in drilling parameters (cutting speed and feed), cutting tool, and batch composition on torque, thrust force and tool wear are studied. It is observed that the thrust force and torque vary greatly with the increase in the speed, feed and yttrium phosphate content in the composite. Significant differences in the thrust and torque are noticed due to the change of drills as well. The bioactivity study is done in simulated body fluid (SBF) for up to 28 days. The growth of bone-like apatite becomes denser with the increase in the number of days for all compositions of the composite, and it is comparable to that of pure hydroxyapatite.
Keywords: bioactivity, drilling, hydroxyapatite, yttrium phosphate
Procedia PDF Downloads 300
5982 Metal Extraction into Ionic Liquids and Hydrophobic Deep Eutectic Mixtures
Authors: E. E. Tereshatov, M. Yu. Boltoeva, V. Mazan, M. F. Volia, C. M. Folden III
Abstract:
Room temperature ionic liquids (RTILs) are a class of liquid organic salts with melting points below 20 °C that are considered to be environmentally friendly ‘designer’ solvents. Pure hydrophobic ILs are known to extract metallic species from aqueous solutions. The closest analogues of ionic liquids are deep eutectic solvents (DESs), eutectic mixtures of at least two compounds with a melting point lower than that of each individual component. DESs are acknowledged to be attractive for organic synthesis and metal processing. Thus, these non-volatile and less toxic compounds are of interest for critical metal extraction. The US Department of Energy and the European Commission consider indium a key metal. Its chemical homologue, thallium, is also an important material for some applications and for environmental safety. The aim of this work is to systematically investigate In and Tl extraction from aqueous solutions into pure fluorinated ILs and hydrophobic DESs. The dependence of the Tl extraction efficiency on the structure and composition of the ionic liquid ions, the metal oxidation state, and the initial metal and aqueous acid concentrations has been studied. The extraction efficiency of the TlXz^(3−z) anionic species (where X = Cl− and/or Br−) is greater for ionic liquids with more hydrophobic cations. Unexpectedly high distribution ratios (> 10³) of Tl(III) were determined even when applying a pure ionic liquid as the receiving phase. An improved mathematical model based on ion exchange and ion pair formation mechanisms has been developed to describe the co-extraction of two different anionic species, and the relative contributions of each mechanism have been determined. The first evidence of indium extraction into new quaternary ammonium- and menthol-based hydrophobic DESs from hydrochloric and oxalic acid solutions, with distribution ratios up to 10³, is also provided.
The data obtained allow us to interpret the mechanisms of thallium and indium extraction into IL and DES media. Understanding the chemical behavior of Tl and In in these new media is imperative for the further improvement of the separation and purification of these elements.
Keywords: deep eutectic solvents, indium, ionic liquids, thallium
Procedia PDF Downloads 241
5981 The Study of ZigBee Protocol Application in Wireless Networks
Authors: Ardavan Zamanpour, Somaieh Yassari
Abstract:
The ZigBee protocol was developed by industry and an MIT laboratory in 1997. ZigBee is a wireless networking technology maintained by the ZigBee Alliance and designed for low-power, low-data-rate applications. It is a protocol that connects electrical devices at very low energy consumption and cost. The first version of IEEE 802.15.4, on which ZigBee is based, operated in the 2.4 GHz, 915 MHz and 868 MHz frequency bands. The name of the system recalls the seemingly random zig-zag paths that bees traverse while pollinating, analogous to the ways in which information packets traverse a mesh network. This paper aims to study the performance and effectiveness of this protocol in wireless networks.
Keywords: ZigBee, protocol, wireless, networks
Procedia PDF Downloads 369
5980 Time's Arrow and Entropy: Violations to the Second Law of Thermodynamics Disrupt Time Perception
Authors: Jason Clarke, Michaela Porubanova, Angela Mazzoli, Gulsah Kut
Abstract:
What accounts for our perception that time inexorably passes in one direction, from the past to the future, the so-called arrow of time, given that the laws of physics permit motion in one temporal direction to also happen in the reverse temporal direction? Modern physics says that the reason for time’s unidirectional physical arrow is the relationship between time and entropy, the degree of disorder in the universe, which is evolving from low entropy (high order; thermal disequilibrium) toward high entropy (high disorder; thermal equilibrium), the second law of thermodynamics. Accordingly, our perception of the direction of time, from past to future, is believed to emanate as a result of the natural evolution of entropy from low to high, with low entropy defining our notion of ‘before’ and high entropy defining our notion of ‘after’. Here we explored this proposed relationship between entropy and the perception of time’s arrow. We predicted that if the brain has some mechanism for detecting entropy, whose output feeds into processes involved in constructing our perception of the direction of time, presentation of violations to the expectation that low entropy defines ‘before’ and high entropy defines ‘after’ would alert this mechanism, leading to measurable behavioral effects, namely a disruption in duration perception. To test this hypothesis, participants were shown briefly-presented (1000 ms or 500 ms) computer-generated visual dynamic events: novel 3D shapes that were seen either to evolve from whole figures into parts (low to high entropy condition) or were seen in the reverse direction: parts that coalesced into whole figures (high to low entropy condition). On each trial, participants were instructed to reproduce the duration of their visual experience of the stimulus by pressing and releasing the space bar. 
To ensure that attention was being deployed to the stimuli, a secondary task was to report the direction of the visual event (forward or reverse motion). Participants completed 60 trials. As predicted, we found that duration reproduction was significantly longer for the high-to-low entropy condition than for the low-to-high entropy condition (p = .03). These preliminary data suggest the presence of a neural mechanism that detects entropy, which is used by other processes to construct our perception of the direction of time, or time's arrow. Keywords: time perception, entropy, temporal illusions, duration perception
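The reported contrast in reproduced durations can be checked with a paired t statistic on per-subject means; a stdlib sketch with made-up reproduction times (ms) in place of the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(cond_a, cond_b):
    """Paired-samples t statistic over per-subject mean reproduced durations."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# hypothetical per-subject mean reproductions (ms), one value per participant
high_to_low = [1080, 1120, 1050, 1095, 1070]   # parts coalescing into wholes
low_to_high = [1010, 1060, 1000, 1045, 1020]   # wholes dissolving into parts
t = paired_t(high_to_low, low_to_high)          # positive t: longer reproductions in high-to-low
```

The resulting t would then be compared against the t distribution with n−1 degrees of freedom to obtain the reported p value.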
Procedia PDF Downloads 172
5979 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network
Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson
Abstract:
The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysis of the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come with a cost. Little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification to take manufacturing digital data from various sources to determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision making on which dataset should be processed at the ‘edge’ and what to send to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0
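A fast-and-frugal heuristic of the kind mentioned here examines one cue at a time and exits on the first cue that discriminates, rather than weighing all attributes at once. The cues and the 100 MB threshold below are hypothetical, for illustration only:

```python
def route_dataset(meta):
    """Fast-and-frugal cascade: check one cue at a time, exit on the first
    discriminating cue. Cue names and thresholds are illustrative."""
    if meta["latency_critical"]:     # control-loop data must be acted on locally
        return "edge"
    if meta["size_mb"] > 100:        # large raw signals: transmission cost outweighs benefit
        return "edge"
    if meta["needs_history"]:        # long-horizon maintenance models live in the cloud
        return "cloud"
    return "edge"                    # default: process where the data is collected
```

The appeal of such a cascade is that routing each dataset costs at most a handful of comparisons, which matters when the decision itself must run on constrained edge hardware.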
Procedia PDF Downloads 182
5978 Multi-Layer Perceptron and Radial Basis Function Neural Network Models for Classification of Diabetic Retinopathy Disease Using Video-Oculography Signals
Authors: Ceren Kaya, Okan Erkaymaz, Orhan Ayar, Mahmut Özer
Abstract:
Diabetes Mellitus (Diabetes) is a disease based on insulin hormone disorders that causes high blood glucose. Clinical findings indicate that diabetes can be diagnosed from electrophysiological signals obtained from the vital organs. 'Diabetic Retinopathy' is one of the most common eye diseases resulting from diabetes, and it is the leading cause of vision loss due to structural alteration of the retinal layer vessels. In this study, features of horizontal and vertical Video-Oculography (VOG) signals have been used to classify non-proliferative and proliferative diabetic retinopathy disease. Twenty-five features are acquired by applying the discrete wavelet transform to VOG signals taken from 21 subjects. Two models, based on the multi-layer perceptron and the radial basis function, are recommended for the diagnosis of Diabetic Retinopathy. The proposed models can also detect the level of the disease. We show the comparative classification performance of the proposed models. Our results show that the proposed RBF model (100%) achieves better classification performance than the MLP model (94%). Keywords: diabetic retinopathy, discrete wavelet transform, multi-layer perceptron, radial basis function, video-oculography (VOG)
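The RBF model's forward pass (Gaussian hidden units over the wavelet feature vector, a linear output layer) can be sketched in a few lines. The centers, widths, weights, and class labels below are toy values, not parameters fitted to the VOG features:

```python
import math

def rbf_classify(x, centers, widths, weights, bias):
    """Forward pass of a radial basis function network: Gaussian hidden units,
    linear output; a positive score is mapped to one class, negative to the other.
    All parameters here are illustrative, not trained values."""
    hidden = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
              for c, s in zip(centers, widths)]
    score = sum(w * h for w, h in zip(weights, hidden)) + bias
    return "proliferative" if score > 0 else "non-proliferative"
```

In practice the centers are typically placed by clustering the training features, and the output weights are then fitted by least squares or gradient descent.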
Procedia PDF Downloads 259
5977 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method
Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari
Abstract:
The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. The meshless local Petrov-Galerkin (MLPG) method is employed to obtain the natural frequencies of the functionally graded (FG) plate. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) trained with the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual data. To simultaneously maximize the first natural frequency and minimize the mass of the FG plate, the weighted-sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide designers of FG plates with useful data. Keywords: optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization
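The PSO step itself can be sketched in plain Python. The velocity/position update below follows the standard inertia-weight formulation; the parameter values, box bounds, and the test function are illustrative stand-ins for the paper's weighted-sum objective evaluated via the trained ANN:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100,
                 inertia=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over box bounds with a standard inertia-weight PSO.
    Hyperparameters are typical textbook values, not the paper's settings."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # pull toward personal best (c1) and global best (c2)
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# A weighted-sum objective for the FG plate problem would take the form
#   f(x) = w1 * (-predicted_first_frequency(x)) + w2 * predicted_mass(x)
# with x = (thickness ratio, volume fraction index) and the predictors given by the ANN.
```

As a sanity check, the routine drives a simple 2-D sphere function close to its minimum at the origin.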
Procedia PDF Downloads 367
5976 Performance Comparison of Resource Allocation without Feedback in Wireless Body Area Networks by Various Pseudo Orthogonal Sequences
Authors: Ojin Kwon, Yong-Jin Yoon, Liu Xin, Zhang Hongbao
Abstract:
A Wireless Body Area Network (WBAN) is a short-range wireless communication network around the human body for various applications such as wearable devices, entertainment, military, and especially medical devices. WBAN has attracted attention for continuous health monitoring systems, including diagnostic procedures, early detection of abnormal conditions, and prevention of emergency situations. Compared to cellular networks, WBAN systems are more difficult to control against inter- and intra-network interference due to limited power, limited computational capability, patient mobility, and non-cooperation among WBANs. In this paper, we compare the performance of resource allocation schemes based on several Pseudo Orthogonal Codewords (POCs) to mitigate inter-WBAN interference. POCs have previously been widely exploited as protocol sequences and optical orthogonal codes. Each POC has different auto- and cross-correlation properties and spectral efficiency according to its construction. To identify different WBANs, several different pseudo orthogonal patterns based on POCs are exploited for resource allocation. By simulating these pseudo orthogonal resource allocations in MATLAB, we obtain the performance of WBANs under different POCs and can analyze and evaluate the suitability of POCs for resource allocation in the WBAN system. Keywords: wireless body area network, body sensor network, resource allocation without feedback, interference mitigation, pseudo orthogonal pattern
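The auto- and cross-correlation properties that distinguish POC constructions reduce to cyclic shift-and-count (Hamming) correlations over binary codewords; a sketch with arbitrary toy codewords rather than any published POC family:

```python
def cyclic_correlation(a, b):
    """Hamming correlation of binary codewords a and b at every cyclic shift."""
    n = len(a)
    return [sum(a[i] & b[(i + s) % n] for i in range(n)) for s in range(n)]

def max_cross_correlation(a, b):
    """Peak cyclic cross-correlation: lower peaks mean less inter-WBAN interference
    when the two codewords schedule transmissions of different networks."""
    return max(cyclic_correlation(a, b))
```

For allocation without feedback, the design goal is a codeword family whose pairwise peaks stay small at every relative shift, since uncoordinated WBANs cannot align their slot timing.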
Procedia PDF Downloads 353
5975 Effect of Synbiotics on Rats' Intestinal Microbiota
Authors: Da Yoon Yu, Jeong A. Kim, In Sung Kim, Yeon Hee Hong, Jae Young Kim, Sang Suk Lee, Sung Chan Kim, So Hui Choe, In Soon Choi, Kwang Keun Cho
Abstract:
The present study was conducted to identify the effects of synbiotics composed of lactic acid (LA) bacteria (LAB) and sea tangle on the rat's intestinal microorganisms and on anti-obesity. The experiment was conducted for six weeks using 8-week-old male rats as experimental animals, with a design of six treatment groups of four repetitions and three rats per repetition. The treatment groups were a normal fat diet control (NFC), a high fat (HF) diet control (HFC), a prebiotic 0% treatment (HF+LA+sea tangle 0%, ST0), a prebiotic 5% treatment (HF+LA+sea tangle 5%, ST5), a prebiotic 10% treatment (HF+LA+sea tangle 10%, ST10), and a prebiotic 15% treatment group (HF+LA+sea tangle 15%, ST15), to test various levels of prebiotics. According to the results, the NFC group showed the highest daily weight gain (22.34 g) and the ST0 group the lowest (19.41 g). However, weight gain over the entire experimental period was the highest in the HFC group (475.73 g) and the lowest in the ST0 group (454.23 g). Feed efficiency was the highest in the HFC group (0.20). Treatment with synbiotics composed of LAB and sea tangle suppressed the weight gain caused by the HF diet and reduced feed efficiency. Intestinal microorganisms were identified through pyrosequencing; the phyla Firmicutes (approximately 60%) and Bacteroidetes (approximately 30%) together accounted for approximately 90% or more of the intestinal microorganisms in all of the treatment groups, indicating that these bacteria dominate the intestines. Firmicutes, which is related to weight gain, accounted for 64.96% of microorganisms in the NFC group, 75.32% in the HFC group, 59.51% in the ST0 group, 61.29% in the ST5 group, 49.91% in the ST10 group, and 39.65% in the ST15 group.
Therefore, Firmicutes showed the highest share in the HFC group, which showed high weight gains, and the lowest share in the group treated with the mixed synbiotics composed of LAB and sea tangle. Bacteroidetes, which is related to the inhibition of weight gain, accounted for 32.12% of microorganisms in the NFC group, 21.57% in the HFC group, 37.66% in the ST0 group, 34.92% in the ST5 group, 44.46% in the ST10 group, and 53.22% in the ST15 group. Therefore, the share of Bacteroidetes was the lowest in the HFC group, with no synbiotics added, and increased with the level of synbiotic treatment. Changes in blood components did not differ significantly among the groups, and SCFA yields were higher in the groups treated with synbiotics than in those without. The present study showed that the supply of synbiotics composed of LAB and sea tangle increased feed intake but led to weight loss, and that their intake had anti-obesity effects through decreases in Firmicutes, microorganisms related to weight gain, and increases in Bacteroidetes, microorganisms related to weight loss. Therefore, synbiotics composed of LAB and sea tangle are considered to have the potential to prevent metabolic disorders in the rat. Keywords: bacteroidetes, firmicutes, intestinal microbiota, lactic acid, sea tangle, synbiotics
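The phylum shares reported here are relative abundances computed from sequencing read counts. A minimal sketch with hypothetical read counts, not the study's data:

```python
def relative_abundance(read_counts):
    """Percent share of each taxon among total sequencing reads."""
    total = sum(read_counts.values())
    return {taxon: 100.0 * n / total for taxon, n in read_counts.items()}

def firmicutes_bacteroidetes_ratio(read_counts):
    """F/B ratio, a coarse obesity-associated marker of the gut community."""
    return read_counts["Firmicutes"] / read_counts["Bacteroidetes"]

# hypothetical read counts for one treatment group
counts = {"Firmicutes": 750, "Bacteroidetes": 220, "Other": 30}
shares = relative_abundance(counts)  # {'Firmicutes': 75.0, 'Bacteroidetes': 22.0, 'Other': 3.0}
```

Comparing such shares or the F/B ratio across diet groups is the computation underlying the percentage tables in this abstract.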
Procedia PDF Downloads 400
5974 Steel Bridge Coating Inspection Using Image Processing with Neural Network Approach
Authors: Ahmed Elbeheri, Tarek Zayed
Abstract:
Steel bridge deterioration has been one of the major problems in North America in recent years, mainly attributed to harsh weather conditions. Steel bridges suffer fatigue cracks and corrosion, which necessitate immediate inspection. Visual inspection is the most common technique for steel bridge inspection, but it depends on the inspector's experience, conditions, and work environment. Many Non-destructive Evaluation (NDE) models have therefore been developed that use non-destructive technologies to be more accurate, reliable, and less dependent on human judgment. Non-destructive techniques such as the eddy current method, the radiographic method (RT), the ultrasonic method (UT), infrared thermography, and laser technology have been used. Here, digital image processing is used for corrosion detection as an alternative to visual inspection. Previous models have used grey-level and colored digital images for processing; however, color images proved better, as the color of the rust distinguishes it from different backgrounds. The detection of rust is an important process, as rust is the first warning of corrosion and a sign of coating erosion. To decide which steel element should be repainted, and how urgently, the percentage of rust should be calculated. In this paper, an image processing approach is developed to detect corrosion and its severity. Two models were developed: the first to detect rust and the second to quantify the rust percentage. Keywords: steel bridge, bridge inspection, steel corrosion, image processing
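Color-based rust detection of this kind reduces to flagging pixels whose hue, saturation, and value fall in a rust-like band and reporting the flagged fraction as severity. The thresholds below are illustrative guesses, not the paper's calibrated values, and a flat list of RGB pixels stands in for a real image:

```python
import colorsys

def rust_percentage(pixels):
    """Percent of RGB pixels whose HSV values fall in a rust-like band
    (red-orange hue, moderate-to-high saturation, mid brightness).
    Thresholds are illustrative, not calibrated against bridge imagery."""
    def is_rust(r, g, b):
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        return h <= 0.11 and s >= 0.4 and 0.15 <= v <= 0.8
    flagged = sum(is_rust(*p) for p in pixels)
    return 100 * flagged / len(pixels)
```

The returned percentage is the severity measure: a rust fraction above a maintenance threshold would mark the element for repainting.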
Procedia PDF Downloads 306
5973 Construction of the Large Scale Biological Networks from Microarrays
Authors: Fadhl Alakwaa
Abstract:
One of the long-standing goals of systems biology is understanding gene-gene interactions. Hence, gene regulatory networks (GRNs) need to be constructed to understand disease ontology and to reduce the cost of drug development. To construct gene regulatory networks from gene expression data, we need to overcome many challenges, such as data denoising and dimensionality. In this paper, we develop an integrated system to reduce data dimension and remove noise. The network generated by our system was validated against available interaction databases and compared to previous methods. The results reveal the performance of the proposed method. Keywords: gene regulatory network, biclustering, denoising, system biology
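A common baseline for constructing a network from expression data is to connect gene pairs whose expression profiles are strongly correlated across samples. The sketch below shows this co-expression baseline, not the paper's integrated denoising/biclustering pipeline, with a toy expression matrix:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def build_grn(expr, threshold=0.9):
    """Edge between two genes when |correlation| across samples exceeds threshold.
    expr maps gene name -> list of expression values over samples."""
    genes = list(expr)
    return {(g1, g2) for i, g1 in enumerate(genes) for g2 in genes[i + 1:]
            if abs(pearson(expr[g1], expr[g2])) >= threshold}
```

Edges produced this way are undirected co-expression links; the validation step described in the abstract would check them against known interaction databases.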
Procedia PDF Downloads 239
5972 Variation in pH Values and Tenderness of Meat of Cattle Fed Different Levels of Lipids
Authors: Erico Da Silva Lima, Tiago Neves Pereira Valente, Roberto De Oliveira Roça
Abstract:
Introduction: Over the last few years, market demand for high-quality meat has increased. Based on this premise, some producers have continuously improved their efficiency in breeding beef cattle to meet this demand. It is well recognized that the final quality of beef is intimately linked to the animal's diet. The key objective of this study is to evaluate the influence of feeding animals cottonseed and its lipids on the final pH and shear force of the meat. Materials and Methods: The study was carried out at the Chapéu de Couro Farm in Aguaí/SP, Brazil, with a group of 39 uncastrated Nellore cattle. The mean age of the animals was 36 months, and the initial mean live weight was 494.1 ± 10.1 kg. Animals were randomly assigned to one of three treatments, based on dry matter: a control diet with 2.50% cottonseed, a diet with 11.50% cottonseed, and a diet with 3.13% cottonseed plus 1.77% protected lipid. The forage:concentrate ratio was 50:50 on a dry matter basis, with chopped sugar cane used as forage. After slaughter, carcasses were identified and divided into two halves that were kept in a cold chamber for 24 h at 2°C. Post-mortem pH was determined with a pH meter in the Longissimus thoracis muscle between the 12th and 13th ribs of the left half carcass. Afterwards, a portion of each animal was removed and divided into three samples (steaks). Steaks were 2.5 cm thick and were identified and stored individually in vacuum plastic bags. Samples were frozen at -18°C. The cooked samples were refrigerated for 12 h at 4°C and then cut into cylinders of 1.10 cm diameter with the support of a drill press, avoiding fat and nerves. Shear force was measured on these cylinders with the Brookfield CT3 Texture Analyzer 25 k equipped with a set of Warner-Bratzler blades.
Results and Discussion: No differences (P > 0.05) in pH 24 h after slaughter were observed in the meat of Nellore cattle fed different sources of fat; the mean value for this variable was 5.59. However, differences (P < 0.05) were found for shear force. For the diet with 2.50% cottonseed, the lowest value found was 5.10 kg, while for the treatment with 11.50% cottonseed the highest value found was 6.30 kg. High shear force values mean a firmer texture of the meat, which indicates less tenderness. The texture of the meat can be influenced by the age and slaughter weight of the animals, and Nellore (Bos taurus indicus) cattle show higher shear force values. Conclusions: Adding cottonseed or protected lipid to the diet does not affect pH values in the meat. Whole cottonseed does not contribute to improving the tenderness of the meat. Acknowledgments: IFGoiano, FAPEG and CNPq (Brazil). Keywords: beef quality, cottonseed, protected fat, shear force
Procedia PDF Downloads 228
5971 Orange Leaves and Rice Straw on Methane Emission and Milk Production in Murciano-Granadina Dairy Goat Diet
Authors: Tamara Romero, Manuel Romero-Huelva, Jose V. Segarra, Jose Castro, Carlos Fernandez
Abstract:
Many foods resulting from processing and manufacturing end up as waste, most of which is burned, dumped into landfills, or used as compost, which wastes resources and causes environmental problems due to unsuitable disposal. Using residues of the crop and food-processing industries to feed livestock has the advantage of obviating the need for costly waste-management programs. The main residues generated in citrus cultivation and rice cropping are pruning waste and rice straw, respectively. Within Spain, the Valencian Community is one of the world's oldest citrus- and rice-production areas. The objective of this experiment was to determine the effects of including orange leaves and rice straw as ingredients in the concentrate diets of goats on milk production and methane (CH₄) emissions. Ten Murciano-Granadina dairy goats (45 kg of body weight, on average) in mid-lactation were selected in a crossover design experiment in which each goat received two treatments in two periods. Both groups were fed 1.7 kg of a pelleted mixed ration; one group (n = 5) was a control (C), and the other group (n = 5) received orange leaves and rice straw (OR). The forage was alfalfa hay and was the same for the two groups (1 kg of alfalfa was offered per goat and day). The diets were formulated to meet the requirements of caprine livestock during the lactation period. The goats were allocated to individual metabolism cages. After 14 days of adaptation, feed intake and milk yield were recorded daily over a 5-day period. Physico-chemical parameters and somatic cell count were determined in milk samples. Then, gas exchange measurements were recorded individually by an open-circuit indirect calorimetry system using a head box. The data were analyzed with a mixed model, with diet and digestibility as fixed effects and goat as a random effect. No differences were found for dry matter intake (2.23 kg/d, on average). Milk yield was higher for the C diet than for OR (2.3 vs. 2.1 kg per goat and day, respectively), and milk fat content was greater for OR than for C (6.5 vs. 5.5%, respectively). The cheese extract was also greater in OR than in C (10.7 vs. 9.6%). Goats fed the OR diet produced significantly less CH₄ than those fed the C diet (27 vs. 30 g/d, respectively). These preliminary results (LIFE Project LOWCARBON FEED LIFE/CCM/ES/000088) suggest that the use of these waste by-products was effective in reducing CH₄ emission without a detrimental effect on milk yield. Keywords: agricultural waste, goat, milk production, methane emission
Procedia PDF Downloads 148
5970 A Decision Support System to Detect the Lumbar Disc Disease on the Basis of Clinical MRI
Authors: Yavuz Unal, Kemal Polat, H. Erdinc Kocer
Abstract:
In this study, a decision support system comprising three stages is proposed to detect disc abnormalities of the lumbar region. In the first stage, feature extraction, T2-weighted sagittal and axial Magnetic Resonance Images (MRI) were taken from 55 people, and 27 appearance and shape features were acquired from both the sagittal and transverse images. In the second stage, the feature weighting process, k-means clustering based feature weighting (KMCBFW), proposed by Gunes et al., was applied to the extracted features. Finally, in the third stage, the classification process, classifier algorithms including the multi-layer perceptron (MLP) neural network, support vector machine (SVM), Naïve Bayes, and decision tree were used to classify whether the subject has lumbar disc disease or not. To test the performance of the proposed method, classification accuracy (%), sensitivity, specificity, precision, recall, f-measure, kappa value, and computation time were used. The best hybrid model for detecting lumbar disc disease from both sagittal and axial MR images is the combination of k-means clustering based feature weighting and the decision tree. Keywords: lumbar disc abnormality, lumbar MRI, lumbar spine, hybrid models, hybrid features, k-means clustering based feature weighting
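KMCBFW-style weighting can be sketched per feature: cluster the feature's values with 1-D k-means and derive a weight from the ratio of the feature mean to the mean of the cluster centers. This is an illustrative form of the Gunes et al. approach, not their exact formulation:

```python
def kmeans_1d(values, iters=20):
    """Two-center 1-D k-means, initialised at the extreme values."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            clusters[0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1].append(v)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers

def kmc_weight(values):
    """Illustrative KMCBFW form: weight = feature mean / mean of cluster centers.
    The weighted feature is then raw value * weight before classification."""
    centers = kmeans_1d(values)
    return (sum(values) / len(values)) / (sum(centers) / len(centers))
```

The intent of such weighting is to compress within-class scatter of each feature before it reaches the classifier, which is why the hybrid with a decision tree performed best here.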
Procedia PDF Downloads 520
5969 Smart Water Main Inspection and Condition Assessment Using a Systematic Approach for Pipes Selection
Authors: Reza Moslemi, Sebastien Perrier
Abstract:
Water infrastructure deterioration can result in increased operational costs, owing to increased repair needs and non-revenue water, and consequently cause a reduced level of service and lower customer satisfaction. Various water main condition assessment technologies have been introduced to the market to evaluate the level of pipe deterioration and to develop appropriate asset management and pipe renewal plans. One of the challenges for any condition assessment and inspection program is to determine the percentage of the water network, and the combination of pipe segments, to be inspected in order to obtain a meaningful representation of the status of the entire network with a desirable level of accuracy. Traditionally, condition assessment has been conducted by selecting pipes based on age or location. However, this does not necessarily offer the best approach, and it is believed that a smart sampling methodology can achieve a better and more reliable estimate of the condition of a water network. This research investigates three sampling methodologies: random, stratified, and systematic. It is demonstrated that selecting pipes based on the proposed clustering and sampling scheme can considerably improve the ability of the inspected subset to represent the condition of the wider network. With a smart sampling methodology, a smaller data sample can provide the same insight as a larger sample. This methodology offers increased efficiency and cost savings for condition assessment processes and projects. Keywords: condition assessment, pipe degradation, sampling, water main
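The two structured sampling schemes compared against random selection can be sketched directly; the pipe records and stratum keys below are placeholders for real network attributes such as material, diameter, or soil type:

```python
import random

def systematic_sample(pipes, n, seed=0):
    """Every k-th pipe after a random start within the first interval,
    where k = population size // sample size."""
    k = len(pipes) // n
    start = random.Random(seed).randrange(k)
    return pipes[start::k][:n]

def stratified_sample(pipes, stratum_of, n_per_stratum, seed=0):
    """Random draw of up to n pipes from each stratum; stratum_of maps a
    pipe record to its stratum key (e.g. material or diameter class)."""
    rng = random.Random(seed)
    strata = {}
    for p in pipes:
        strata.setdefault(stratum_of(p), []).append(p)
    return [p for group in strata.values()
            for p in rng.sample(group, min(n_per_stratum, len(group)))]
```

Stratification guarantees every pipe class appears in the inspected subset, which is the property that lets a small sample stand in for the wider network.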
Procedia PDF Downloads 150
5968 Depth of Penetration and Nature of Interferential Current in Cutaneous, Subcutaneous and Muscle Tissues
Authors: A. Beatti, L. Chipchase, A. Rayner, T. Souvlis
Abstract:
The aims of this study were to investigate the depth of interferential current (IFC) penetration through soft tissue and the area over which IFC spreads during clinical application. Premodulated IFC and ‘true’ IFC at beat frequencies of 4, 40, and 90 Hz were applied via four electrodes to the distal medial thigh of 15 healthy subjects. The current was measured via three Teflon-coated fine needle electrodes inserted into the superficial layer of the skin, into the subcutaneous tissue (≈1 cm deep), and into the muscle tissue (≈2 cm deep). The needle electrodes were placed in the middle of the four IFC electrodes, between two channels, and outside the four electrodes. Readings were taken at each tissue depth from each electrode during each treatment frequency, then digitized and stored for analysis. All voltages were greater than baseline at all depths and locations (p < 0.01), and voltages decreased with depth (p = 0.039). Lower voltages of all currents were recorded in the middle of the four electrodes, with the highest voltage recorded outside the four electrodes at all depths (p = 0.000). For each frequency of ‘true’ IFC, the voltage was higher in the superficial layer outside the electrodes (p ≤ 0.01). Premodulated IFC had higher voltages along the line of one circuit (p ≤ 0.01). Clinically, IFC appears to pass through the skin layers to depth and is more efficient than premodulated IFC when targeting muscle tissue. Keywords: electrotherapy, interferential current, interferential therapy, medium frequency current
Procedia PDF Downloads 346