Search results for: wireless sensors networks
2285 Increasing Power Transfer Capacity of Distribution Networks Using Direct Current Feeders
Authors: Akim Borbuev, Francisco de León
Abstract:
Economic and population growth in densely-populated urban areas introduce major challenges to distribution system operators, planners, and designers. To supply added loads, utilities are frequently forced to invest in new distribution feeders. However, this is becoming increasingly challenging due to space limitations and rising installation costs in urban settings. This paper proposes the conversion of critical alternating current (ac) distribution feeders into direct current (dc) feeders to increase the power transfer capacity by a factor as high as four. Current trends suggest that the return of dc transmission, distribution, and utilization is inevitable. Since a total system-level transformation to dc operation is not possible in a short period of time, due to the huge investments needed and utility unreadiness, this paper recommends that feeders that are expected to exceed their limits in the near future be converted to dc. The increase in power transfer capacity is achieved through several key differences between ac and dc power transmission systems. First, it is shown that underground cables can be operated at a higher dc voltage than the ac voltage for the same dielectric stress in the insulation. Second, cable sheath losses, due to induced voltages yielding circulating currents, which can be as high as the phase conductor losses under ac operation, are not present under dc. Finally, skin and proximity effects in conductors and sheaths do not exist in dc cables. The paper demonstrates that, in addition to the increased power transfer capacity, utilities substituting ac feeders with dc feeders could benefit from significantly lower costs and reduced losses. Installing dc feeders is less expensive than installing new ac feeders even when new trenches are not needed. Case studies using the IEEE 342-Node Low Voltage Networked Test System quantify the technical and economic benefits of dc feeders.
Keywords: DC power systems, distribution feeders, distribution networks, power transfer capacity
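As a rough, hedged illustration of the capacity argument, the sketch below compares ac and dc transfer over the same three-conductor cable. The 13.8 kV feeder voltage, the 400 A ampacity, and the voltage and current uprating factors are assumptions chosen for illustration, not values from the paper; the paper's detailed cable and converter analysis is what supports ratios approaching four.

```python
import math

# Parametric sketch of the dc/ac capacity ratio for one converted feeder.
# All numbers below are assumptions for illustration only.

V_ll_ac = 13.8e3          # assumed ac line-to-line voltage (V)
I_ac = 400.0              # assumed ac ampacity per conductor (A)

k_voltage = 2.0           # assumed dc voltage uprating vs. the ac peak
                          # (same-dielectric-stress argument from the abstract)
k_current = 1.25          # assumed dc ampacity gain (no sheath, skin or proximity losses)

V_phase = V_ll_ac / math.sqrt(3)
P_ac = 3 * V_phase * I_ac                 # three-phase ac transfer

V_dc = k_voltage * math.sqrt(2) * V_phase
I_dc = k_current * I_ac
P_dc = 2 * V_dc * I_dc                    # bipolar dc on two of the three conductors

print(f"P_ac = {P_ac/1e6:.2f} MW, P_dc = {P_dc/1e6:.2f} MW, ratio = {P_dc/P_ac:.2f}")
```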
Procedia PDF Downloads 128
2284 Energy Harvesting with Zinc Oxide Based Nanogenerator: Design and Simulation Using Comsol-4.3 Software
Authors: Akanksha Rohit, Ujjwala Godavarthi, Anshua Mukherjee
Abstract:
Nanotechnology is one of the promising sustainable solutions in the era of miniaturization due to its multidisciplinary nature. The most interesting aspect of nanotechnology is its wide-ranging applications, from electronics to military and biomedical uses. It tries to connect individuals more closely to the environment. In this paper, the concept of parasitic energy harvesting is used in designing nanogenerators with the COMSOL 4.3 software. The output of the nanogenerator is optimized using the following constraints: ease of availability of the material, fabrication process, and cost of the material. The nanogenerator is optimized using ZnO-based nanowires, PMMA as the insulator, and aluminum and silicon as the metal electrodes. The energy harvested from the model can be used to power nanobots and several other biomedical sensors, and eventually to replace batteries. Although advancements in this field are challenging, they are the future of the nano era.
Keywords: zinc oxide, piezoelectric, PMMA, parasitic energy harvesting, renewable energy engineering
Procedia PDF Downloads 364
2283 An Exploration of Cyberspace Security, Strategy for a New Era
Authors: Laxmi R. Kasaraneni
Abstract:
The Internet connects all networks, including the nation’s critical infrastructure, which is used extensively not only by the government and military to protect sensitive information and execute missions, but also as the primary infrastructure providing services that enable modern conveniences such as education, potable water, electricity, natural gas, and financial transactions. It has become the central nervous system for the government, the citizens, and the industries. When it is attacked, the effects can ripple far and wide, impacting not only citizens’ well-being but also the nation’s economy, civil infrastructure, and national security. Since these critical services may be targeted by malicious hackers during cyber warfare, it is imperative not only to protect them and mitigate any immediate or potential threats, but also to understand the current or potential impacts beyond the IT networks or the organization. The nation’s IT infrastructure, which is now vital for communication, commerce, and control of our physical infrastructure, is highly vulnerable to attack. While existing technologies can address some vulnerabilities, fundamentally new architectures and technologies are needed to address the larger structural insecurities of an infrastructure developed in a more trusting time, when mass cyber attacks were not foreseen. This research is intended to improve the core functions of the Internet and critical-sector information systems by providing a clear path to create a safe, secure, and resilient cyber environment that helps stakeholders at all levels of government and the private sector work together to develop the cybersecurity capabilities that are key to our economy, national security, and public health and safety. This research paper also emphasizes the present and future cyber security threats, the capabilities and goals of cyber attackers, a strategic concept and steps to implement cybersecurity for maximum effectiveness, enabling technologies, some strategic assumptions and critical challenges, and the future of cyberspace.
Keywords: critical challenges, critical infrastructure, cyber security, enabling technologies, national security
Procedia PDF Downloads 294
2282 From Homogeneous to Phase Separated UV-Cured Interpenetrating Polymer Networks: Influence of the System Composition on Properties and Microstructure
Authors: Caroline Rocco, Feyza Karasu, Céline Croutxé-Barghorn, Xavier Allonas, Maxime Lecompère, Gérard Riess, Yujing Zhang, Catarina Esteves, Leendert van der Ven, Rolf van Benthem, Gijsbertus de With
Abstract:
Acrylates are widely used in UV-curing technology. Their high reactivity can, however, limit their conversion due to early vitrification. In addition, free radical photopolymerization is known to be sensitive to oxygen inhibition, leading to tacky surfaces. Although epoxides can reach full polymerization, they are sensitive to humidity and exhibit a low polymerization rate. To overcome the intrinsic limitations of both classes of monomers, interpenetrating polymer networks (IPNs) can be synthesized. They consist of at least two cross-linked polymers which are permanently entangled. They can be obtained by thermal and/or light-induced polymerization in a one- or two-step approach. IPNs can display homogeneous to heterogeneous morphologies with various degrees of phase separation, strongly linked to monomer miscibility as well as to the synthesis parameters. In this presentation, we synthesize UV-cured methacrylate-epoxide based IPNs with different chemical compositions in order to get a better understanding of their formation and phase separation. Miscibility before and during the photopolymerization, reaction kinetics, as well as mechanical properties and morphology have been investigated. The key parameters controlling the morphology and the phase separation, namely monomer miscibility and synthesis parameters, have been identified. By monitoring the stiffness changes on the film surface, atomic force acoustic microscopy (AFAM) gave, in conjunction with polymerization kinetic profiles and thermomechanical properties, explanations that corroborated the miscibility predictions. When varying the methacrylate/epoxide ratio, it was possible to move from a miscible and highly interpenetrated IPN to a totally immiscible and phase-separated one.
Keywords: investigation of properties and morphology, kinetics, phase separation, UV-cured IPNs
Procedia PDF Downloads 367
2281 DNA and DNA-Complexes Modified with Electromagnetic Radiation
Authors: Ewelina Nowak, Anna Wisla-Swider, Krzysztof Danel
Abstract:
Aqueous suspensions of DNA were illuminated with linearly polarized visible light and ultraviolet light for 5, 15, 20 and 40 h. In order to check the nature of the modification, DNA interactions were characterized by FTIR spectroscopy. For each illuminated sample, the weight-average molecular weight and hydrodynamic radius were measured by high-pressure size exclusion chromatography. The resulting optical changes of the illuminated DNA were investigated using UV-Vis and photoluminescence spectra. The optical properties show potential application in sensors based on modified DNA. Selected DNA-surfactant complexes were then illuminated with electromagnetic radiation for 5 h. The molecular structure and optical characteristics of the obtained complexes were examined. Illumination led to changes in the physicochemical properties of the complexes as compared with native DNA. The observed changes were induced by rearrangement of the molecular structure of the DNA chains.
Keywords: biopolymers, deoxyribonucleic acid, ionic liquids, linearly polarized visible light, ultraviolet
Procedia PDF Downloads 210
2280 Effects of Incident Angle and Distance on Visible Light Communication
Authors: Taegyoo Woo, Jong Kang Park, Jong Tae Kim
Abstract:
Visible light communication (VLC) provides wireless communication features in illumination systems. One of the key applications is to recognize the user location by means of indoor illuminators such as light emitting diodes. For localization of individual receivers in these systems, we usually assume that receivers and transmitters are placed in parallel. However, it is difficult to satisfy this assumption because the receivers move randomly in real cases. It is therefore necessary to analyze the case where the transmitter is not placed perfectly parallel to the receiver. It is also important to identify how the optical gain changes with the tilt angles of the receivers and their distances from the illuminators. In this paper, we simulate the optical gain for various cases where the tilt of the receiver and the distance change. We then identify the patterns by which the optical gain changes according to the tilt angle of a receiver and the distance. These results can help many VLC applications understand the extent of the location errors with regard to the optical gains of the receivers and identify the root cause.
Keywords: visible light communication, incident angle, optical gain, light emitting diode
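A minimal sketch of the optical gain calculation that such a simulation rests on is given below, using the standard Lambertian line-of-sight channel model; the LED half-power angle, detector area, field of view, and geometry are assumed example values, not parameters from the paper.

```python
import numpy as np

# Line-of-sight channel gain of a Lambertian LED source, the usual model in VLC
# link analysis. All parameter values are assumptions for illustration only.

def los_gain(distance, irradiance_angle, incidence_angle,
             half_power_angle=np.radians(60), detector_area=1e-4, fov=np.radians(70)):
    """DC channel gain H(0) for a receiver at `distance` (m).

    irradiance_angle: angle at the LED between its normal and the receiver (rad)
    incidence_angle:  angle at the photodiode between its normal and the LED (rad)
    """
    m = -np.log(2) / np.log(np.cos(half_power_angle))   # Lambertian order
    if incidence_angle > fov:
        return 0.0                                       # outside the field of view
    return ((m + 1) * detector_area / (2 * np.pi * distance**2)
            * np.cos(irradiance_angle)**m * np.cos(incidence_angle))

# Example: receiver 2.5 m from the LED, progressively tilted away from it
for tilt_deg in (0, 15, 30, 45):
    g = los_gain(2.5, np.radians(10), np.radians(10 + tilt_deg))
    print(f"tilt {tilt_deg:2d} deg -> gain {g:.3e}")
```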
Procedia PDF Downloads 335
2279 Performance Comparison of Joint Diagonalization Structure (JDS) Method and Wideband MUSIC Method
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
We simulate an efficient multiple wideband and non-stationary source localization algorithm that exploits both the non-stationarity of the signals and the array geometric information. This algorithm is based on the joint diagonalization structure (JDS) of a set of short-time power spectrum matrices at different time instants of each frequency bin. JDS can be used for quick and accurate localization of multiple non-stationary sources. The JDS algorithm is a one-stage process, i.e., it directly searches for the directions of arrival (DOAs) over the continuous location parameter space. The JDS method requires that the number of sensors is not less than the number of sources. From the simulation results, one can conclude that the JDS method can localize two sources when their angular separation is not less than 7 degrees, whereas wideband MUSIC is only able to localize two sources separated by at least 18 degrees.
Keywords: joint diagonalization structure (JDS), wideband direction of arrival (DOA), wideband MUSIC
Procedia PDF Downloads 468
2278 Development a Forecasting System and Reliable Sensors for River Bed Degradation and Bridge Pier Scouring
Authors: Fong-Zuo Lee, Jihn-Sung Lai, Yung-Bin Lin, Xiaoqin Liu, Kuo-Chun Chang, Zhi-Xian Yang, Wen-Dar Guo, Jian-Hao Hong
Abstract:
In recent years, climate change has been a major factor increasing rainfall intensity and the frequency of extreme rainfall. Increased rainfall intensity and extreme rainfall frequency raise the probability of flash floods with abundant sediment transport in a river basin. The floods caused by heavy rainfall may damage bridges, embankments, and hydraulic works, and cause other disasters. Therefore, the scouring of bridge pier, embankment, and spur dike foundations caused by floods has been a severe problem worldwide. This problem is especially acute in East Asian countries such as Taiwan and Japan, because these areas suffer typhoons, earthquakes, and flood events every year. Because river morphology results from the complex interaction between the flow patterns caused by hydraulic works and sediment transport, it is extremely difficult to develop a reliable and durable sensor to measure river bed degradation and bridge pier scouring. Therefore, an innovative scour monitoring sensor using a vibration-based micro-electro-mechanical system (MEMS) was developed. This vibration-based MEMS sensor was packaged inside a stainless sphere with the proper protection of fully-filled resin, and it can measure free vibration signals to detect scouring/deposition processes at the bridge pier. In addition, a user-friendly operational system was developed in this research; it includes a rainfall-runoff model, one-dimensional and two-dimensional numerical models, and applicable sediment transport equations and local scour formulas for bridge piers. The operational system produces simulation results for flood events, including the elevation changes due to river bed erosion near the specified bridge pier and the erosion depth around bridge piers. The system is easy to operate and has an integrated interface that allows users to calibrate and verify the numerical models and to display simulation results alongside the data from the scour monitoring sensors. To forecast the erosion depth of the river bed and the main bridge pier in the study area, the system also connects to the rainfall forecast data from the Taiwan Typhoon and Flood Research Institute. The results can provide useful information to river and bridge engineering management units in advance.
Keywords: flash flood, river bed degradation, bridge pier scouring, a friendly operational system
Procedia PDF Downloads 191
2277 Simple Multipath Compensation for Frequency Modulated Signals: A Case of Radio Frequency vs. Quadrature Baseband
Authors: Lusungu Ndovi
Abstract:
Radio propagation from point to point is affected by the physical channel in many ways. A signal arriving at a destination travels through a number of different paths, which are referred to as multipaths. Research in this area of wireless communications has progressed well over the years, taking different angles of focus: some researchers focus on ways of reducing or avoiding multipath effects, whilst others focus on mitigating the effects of multipath through compensation schemes. Baseband processing is seen as one field of signal processing that is cardinal to the advancement of software-defined radio technology. This has led to wide research into carrying out certain algorithms at baseband. This paper considers compensating for multipath for frequency modulated signals. The compensation process is carried out at radio frequency (RF) and at quadrature baseband (QBB), and the results are compared. Simulations are carried out using MATLAB so as to show the benefits of working at lower QBB frequencies than at RF.
Keywords: quadrature baseband, radio frequency, multipath compensation, frequency modulation, signal processing
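As a hedged illustration of what baseband multipath compensation involves, the sketch below passes a toy complex-baseband signal through an assumed multipath channel and removes it with a frequency-domain zero-forcing equalizer; the channel taps, noise level, and test signal are assumptions, and the paper's own RF vs. QBB comparison is not reproduced here.

```python
import numpy as np

# Multipath compensation at quadrature baseband: a complex baseband signal is
# distorted by an assumed multipath channel and then equalized in the frequency
# domain. Signal and channel values are illustrative only.

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
# Toy FM-like complex baseband signal (phase is the integral of a slow tone)
baseband = np.exp(1j * 2 * np.pi * 0.05 * np.cumsum(np.cos(2 * np.pi * 0.002 * t)))

h = np.array([1.0, 0.0, 0.5, 0.0, 0.25], dtype=complex)      # assumed multipath taps
received = (np.convolve(baseband, h)[:n]
            + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Zero-forcing compensation: divide by the (here known) channel frequency response
H = np.fft.fft(h, n)
equalized = np.fft.ifft(np.fft.fft(received) / H)

print("mean-square error before compensation:", np.mean(np.abs(received - baseband) ** 2))
print("mean-square error after  compensation:", np.mean(np.abs(equalized - baseband) ** 2))
```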
Procedia PDF Downloads 481
2276 BER of the Leaky Feeder under Rayleigh Fading Multichannel Reception with Imperfect Phase Estimation
Authors: Hasan Farahneh, Xavier Fernando
Abstract:
The leaky feeder (LF) has been a proven technology for many decades, and it promises broadband wireless access over short ranges, but it has been overlooked until now. The LF is a natural MIMO transceiver ideal for micro and pico cells. In this work, the LF is considered as a linear multiple-input single-output (MISO) antenna array, and the average bit error rate (BER) is derived in a Rayleigh fading channel assuming ideal, independent and identically distributed (i.i.d.) paths, i.e., no correlation or mutual coupling between the transmit antennas (slots) or the receiver antenna, for QPSK modulation with imperfect phase estimation. We consider maximal ratio transmission (MRT) at the transmit end and maximal ratio combining (MRC) at the receiving end. Analytical expressions are derived for the BER with radiating cable transmitters. The effects of slot spacing and carrier frequency on the BER are also studied. Numerical evaluations show that the radiating cable transmitter offers a much lower BER than a single-antenna transmitter at the same SNR.
Keywords: leaky feeder, BER, QPSK, Rayleigh fading, channel gain, phase mismatch
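The following Monte Carlo sketch is a simplified stand-in for the analytical derivation: QPSK with maximal ratio transmission over an i.i.d. Rayleigh MISO channel and Gaussian phase-estimation errors at each slot. The number of slots, SNR, and phase-error spread are assumed values, not parameters from the paper.

```python
import numpy as np

# QPSK BER under MRT over an i.i.d. Rayleigh MISO channel with phase mismatch.
def simulate_ber(n_slots=4, snr_db=10.0, phase_std_deg=15.0, n_sym=200_000, seed=1):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, (n_sym, 2))
    # Gray-mapped QPSK symbols with unit average energy
    s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

    h = (rng.standard_normal((n_sym, n_slots))
         + 1j * rng.standard_normal((n_sym, n_slots))) / np.sqrt(2)
    phase_err = rng.normal(0.0, np.radians(phase_std_deg), (n_sym, n_slots))

    # MRT weights built from the imperfectly estimated channel phase
    w = np.conj(h) * np.exp(1j * phase_err)
    w /= np.linalg.norm(h, axis=1, keepdims=True)
    gain = np.sum(h * w, axis=1)          # effective scalar channel seen by the receiver

    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
    r = gain * s + noise_std * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

    # Receiver detects as if the effective channel were real and positive (perfect-MRT case)
    errs = np.sum((r.real < 0) != (bits[:, 0] == 1)) + np.sum((r.imag < 0) != (bits[:, 1] == 1))
    return errs / (2 * n_sym)

print("BER, perfect phase   :", simulate_ber(phase_std_deg=0.0))
print("BER, 15 deg mismatch :", simulate_ber(phase_std_deg=15.0))
```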
Procedia PDF Downloads 381
2275 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks
Authors: Muneeb Ullah, Daishihan, Xiadong Young
Abstract:
Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of CXRs (chest X-rays) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. Initially, a generator model is created that uses digressive channels to generate images of CXR scans for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining the dense connectivity strategy and instance normalization, making use of their feature extraction capabilities on hazy CXR areas. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. From 4,062 COVID-19 CXR images, the COVID-19 GAN model successfully produces 8,124 synthetic CXR images. The COVID-19 GAN model produces COVID-19 CXR images that outperform DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images possess noticeable haziness, offering a promising approach to address the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than those obtained with the initial dataset and other augmentation techniques. Among these models, ImagNet exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method is a solution to address overfitting issues in disease identification and can enhance identification accuracy effectively.
Keywords: classification, deep learning, medical images, CXR, GAN
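Since the abstract ranks models by the Fréchet inception distance, a minimal sketch of how that metric is computed is shown below; the random feature vectors are placeholders for the Inception-style embeddings of real and generated CXRs and are not part of the original work.

```python
import numpy as np
from scipy import linalg

# Fréchet inception distance between two sets of feature vectors: the distance
# between Gaussians fitted to features of real and generated images.

def fid(features_real, features_fake):
    mu1, mu2 = features_real.mean(axis=0), features_fake.mean(axis=0)
    sigma1 = np.cov(features_real, rowvar=False)
    sigma2 = np.cov(features_fake, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):        # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2 * covmean))

# Placeholder features standing in for Inception embeddings of real/generated CXRs
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (500, 64))
fake = rng.normal(0.1, 1.1, (500, 64))
print(f"FID = {fid(real, fake):.3f}")
```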
Procedia PDF Downloads 96
2274 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs
Authors: Anika Chebrolu
Abstract:
Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight against these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated based on its ability to generate multi-target ligands against two distinct proteins, multi-target ligands against three distinct proteins, and multi-target ligands against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to design multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
Keywords: drug design, multitargeticity, de-novo, reinforcement learning
Procedia PDF Downloads 97
2273 Miniaturized and Compact Monopole Corner Antenna with a Periodic Slot Truncated and T-Inverted Stub-Tuning for Ultra Wideband Applications
Authors: R. Dakir, J. Zbitou, Ahmed Mouhsen, A. Errkik, A. Tajmouati, M. Latrach
Abstract:
The design and analysis of a new compact and miniaturized monopole antenna structure for ultra-wideband (UWB) wireless applications are presented in this paper. The proposed antenna structure is based on a corner radiator patch with a T-shaped slot, fed by a microstrip feed line with a partial ground plane that combines a periodic rectangular slot and an inverted T-stub tuning to increase the bandwidth. The design parameters and the performance of the suggested antenna are investigated using CST Microwave Studio and Advanced Design System. The final prototype of the proposed antenna operates from 3 GHz to 25 GHz, corresponding to a wide input impedance bandwidth of around 157.14%, with a size of 16 x 24 mm², and can be easily integrated with radio-frequency or microwave circuits with low-cost manufacturing. Details of the UWB antenna design and both simulated and measured results are described and discussed.
Keywords: UWB, T-shaped slots, improvement, bandwidth, stub tuning
Procedia PDF Downloads 295
2272 Infrastructure Development – Stages in Development
Authors: Seppo Sirkemaa
Abstract:
Information systems infrastructure is the basis of business systems and processes in the company. It should be a reliable platform for business processes and activities but also have the flexibility to accommodate changing business needs. The development of an infrastructure that is robust, reliable, and flexible is a challenge. Understanding technological capabilities and business needs is a key element in the development of successful information systems infrastructure.
Keywords: development, information technology, networks, technology
Procedia PDF Downloads 118
2271 [Keynote Talk]: Evidence Fusion in Decision Making
Authors: Mohammad Abdullah-Al-Wadud
Abstract:
In the current era of automation and artificial intelligence, systems have come to depend increasingly on the decision-making capabilities of machines. Such systems/applications range from simple classifiers to sophisticated surveillance systems based on traditional sensors and related equipment, which are becoming more common in the internet of things (IoT) paradigm. However, the available data for such problems are usually imprecise and incomplete, which leads to uncertainty in decisions made by traditional probability-based classifiers. This calls for a robust fusion framework to combine the available information sources with some degree of certainty. The theory of evidence provides such a method for combining evidence from different (possibly unreliable) sources/observers. This talk will address the employment of the Dempster-Shafer theory of evidence in some practical applications.
Keywords: decision making, Dempster-Shafer theory, evidence fusion, incomplete data, uncertainty
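A minimal sketch of the core combination step in Dempster-Shafer evidence fusion is given below; the frame of discernment and the two sensors' mass assignments are invented example values, not data from the talk.

```python
from itertools import product

# Dempster's rule of combination for two mass functions over the same frame of
# discernment. Mass values below are illustrative only.

def combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb              # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sensors reporting on the frame {intruder, animal, noise}
m_sensor1 = {frozenset({"intruder"}): 0.6,
             frozenset({"intruder", "animal"}): 0.3,
             frozenset({"intruder", "animal", "noise"}): 0.1}
m_sensor2 = {frozenset({"animal"}): 0.5,
             frozenset({"intruder", "animal"}): 0.4,
             frozenset({"intruder", "animal", "noise"}): 0.1}

for focal, mass in sorted(combine(m_sensor1, m_sensor2).items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```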
Procedia PDF Downloads 425
2270 Effects of Earthquake Induced Debris to Pedestrian and Community Street Network Resilience
Authors: Al-Amin, Huanjun Jiang, Anayat Ali
Abstract:
Reinforced concrete (RC) frames, especially ordinary RC frames, are prone to structural failure or collapse during seismic events, generating a large amount of debris that obstructs adjacent areas, including streets. These blocked areas severely impede post-earthquake resilience. This study uses computational simulation (FEM) to investigate the amount of debris generated by the seismic collapse of an ordinary reinforced concrete moment frame building and its effects on the adjacent pedestrian and road network. A three-story ordinary reinforced concrete frame building, primarily designed for gravity loads and earthquake resistance, was selected for analysis. Sixteen different ground motions were applied and scaled up until the total collapse of the tested building to evaluate the failure mode under various seismic events. Four types of collapse direction were identified through the analysis, namely aligned (positive and negative) and skewed (positive and negative), with aligned collapse being more predominant than skewed cases. The amount and distribution of debris around the collapsed building were assessed to investigate the interaction between collapsed buildings and adjacent street networks. An interaction was established between a building that collapsed in an aligned direction and the adjacent pedestrian walkway and narrow street located in an unplanned old city. The FEM model was validated against an existing shaking table test. The presented results can be used to simulate the interdependency between the debris generated from the collapse of seismic-prone buildings and the resilience of street networks. These findings provide insights for better disaster planning and resilient infrastructure development in earthquake-prone regions.
Keywords: building collapse, earthquake-induced debris, ORC moment resisting frame, street network
Procedia PDF Downloads 85
2269 Design and Implementation of a Control System for a Walking Robot with Color Sensing and Line following Using PIC and ATMEL Microcontrollers
Authors: Ibraheem K. Ibraheem
Abstract:
The aim of this research is to design and implement a line-tracking mobile robot. The robot must follow a line of a given color drawn on the floor, avoid hitting moving objects such as another moving robot or walking people, and perform color sensing. The control system reacts by controlling each of the motors to keep the tracking sensor over the middle of the line. Proximity sensors are used to avoid hitting moving objects that may pass in front of the robot. The programs were written using microC instructions and then converted into their PIC16F887 and ATmega48/88/168 microcontroller counterparts. Practical simulations show that the walking robot accurately achieves line following, correctly recognizes the colors, and avoids any obstacle in front of it.
Keywords: color sensing, H-bridge, line following, mobile robot, PIC microcontroller, obstacle avoidance, phototransistor
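The sketch below is a hedged Python rendering of the kind of control step the firmware described above performs on each loop iteration; the sensor layout, proximity threshold, and motor duty values are assumptions for illustration, not the actual microC code.

```python
# One control step of a line-following robot with obstacle avoidance and color
# sensing. Thresholds, speeds and the three-sensor layout are assumed values.

BASE_SPEED = 60   # assumed PWM duty (%) on both motors when centred on the line
TURN_GAIN = 25    # assumed correction applied when the line drifts to one side

def control_step(left_ir, centre_ir, right_ir, proximity_cm, colour):
    """Return (left_motor, right_motor) duty from one sensor sample."""
    if proximity_cm < 15:                     # obstacle ahead -> stop both motors
        return 0, 0
    if colour != "target":                    # wrong line colour -> stop
        return 0, 0
    if centre_ir:                             # tracking sensor over the middle of the line
        return BASE_SPEED, BASE_SPEED
    if left_ir:                               # line is to the left -> steer left
        return BASE_SPEED - TURN_GAIN, BASE_SPEED + TURN_GAIN
    if right_ir:                              # line is to the right -> steer right
        return BASE_SPEED + TURN_GAIN, BASE_SPEED - TURN_GAIN
    return 0, 0                               # line lost -> stop

# Sample readings: centred, drifted right, obstacle in front
for sample in [(0, 1, 0, 50, "target"), (0, 0, 1, 50, "target"), (0, 1, 0, 10, "target")]:
    print(sample, "->", control_step(*sample))
```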
Procedia PDF Downloads 398
2268 3 Phase Induction Motor Control Using Single Phase Input and GSM
Authors: Pooja S. Billade, Sanjay S. Chopade
Abstract:
This paper focuses on the design of three-phase induction motor control using a single-phase input and GSM. The controller used in this work provides wireless speed control using a GSM technique, which proves to be very efficient and reliable in applications. The most common control principle is the constant V/Hz principle, which requires that the magnitude and frequency of the voltage applied to the stator of a motor maintain a constant ratio. By doing this, the magnitude of the magnetic field in the stator is kept at an approximately constant level throughout the operating range, and maximum constant torque-producing capability is maintained. The energy that a switching power converter delivers to a motor is controlled by pulse width modulated (PWM) signals applied to the gates of the power transistors in an H-bridge configuration. PWM signals are pulse trains with fixed frequency and magnitude and variable pulse width. When a PWM signal is applied to the gate of a power transistor, it causes the turn-on and turn-off intervals of the transistor to change from one PWM period to the next.
Keywords: PIC, GSM (global system for mobile communications), LCD (liquid crystal display), IM (induction motor)
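As an illustration of the constant V/Hz principle and the PWM duty cycles it drives, here is a small hedged sketch; the rated voltage, rated frequency, low-speed boost, and carrier frequency are assumed values rather than the ones used in the paper's hardware.

```python
import numpy as np

# Constant V/Hz command and sine-triangle PWM duty cycles. Ratings are assumed.

V_RATED = 400.0      # assumed rated line voltage (V)
F_RATED = 50.0       # assumed rated frequency (Hz)
V_BOOST = 15.0       # assumed low-speed voltage boost (V)

def v_per_hz_command(f_demand):
    """Voltage magnitude keeping V/f (plus a low-speed boost) constant."""
    v = V_BOOST + (V_RATED - V_BOOST) * f_demand / F_RATED
    return min(v, V_RATED)

def pwm_duties(f_demand, f_carrier=2000.0, n_samples=8):
    """Duty cycles of one H-bridge leg over a few carrier periods."""
    m = v_per_hz_command(f_demand) / V_RATED          # modulation index (0..1)
    t = np.arange(n_samples) / f_carrier
    return 0.5 * (1.0 + m * np.sin(2 * np.pi * f_demand * t))

for f in (10.0, 25.0, 50.0):
    print(f"{f:4.0f} Hz -> V = {v_per_hz_command(f):6.1f} V, "
          f"duties = {np.round(pwm_duties(f), 2)}")
```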
Procedia PDF Downloads 448
2267 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network
Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy
Abstract:
The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space and trade off the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation, which encodes some meaningful information into individual nodes in a network layer, affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of an input, along with the capacity of a localist representation for multiple instance selections, affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high, by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.
Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence
Procedia PDF Downloads 127
2266 Two Wheels Differential Type Odometry for Robot
Authors: Abhishek Jha, Manoj Kumar
Abstract:
This paper proposes a new two-wheel differential-type odometry to estimate the next position and orientation of mobile robots. The proposed odometry is composed of two independent wheels with respective encoders. The two wheels rotate independently, and the change in heading is determined by the difference in the velocities of the two wheels. The angular velocities of the two wheels are measured by rotary encoders. A mathematical model is proposed for the mobile robot to move precisely towards the goal. Using the measured values of the two encoders, the current displacement vector of the mobile robot is calculated by the kinematics of the mathematical model. Using the displacement vector, the next position and orientation of the mobile robot are estimated by the proposed odometry. Results of simulator experiments with the developed odometry are shown.
Keywords: mobile robot, odometry, unicycle, differential type, encoders, infrared range sensors, kinematic model
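A minimal sketch of the standard differential-drive odometry update that the abstract refers to is shown below; the wheel radius, track width, encoder resolution, and sample encoder counts are assumed illustration values, not the paper's.

```python
import math

# Differential-drive odometry: encoder counts -> displacement vector -> new pose.
WHEEL_RADIUS = 0.05      # m (assumed)
TRACK_WIDTH = 0.30       # m, distance between the two wheels (assumed)
TICKS_PER_REV = 1024     # encoder resolution (assumed)

def update_pose(x, y, theta, ticks_left, ticks_right):
    """Return the new (x, y, theta) after one sampling interval."""
    d_left = 2 * math.pi * WHEEL_RADIUS * ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * ticks_right / TICKS_PER_REV
    d_centre = (d_left + d_right) / 2.0           # displacement of the robot centre
    d_theta = (d_right - d_left) / TRACK_WIDTH    # change in heading
    x += d_centre * math.cos(theta + d_theta / 2.0)
    y += d_centre * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

pose = (0.0, 0.0, 0.0)
for left, right in [(512, 512), (512, 560), (480, 560)]:    # sample encoder readings
    pose = update_pose(*pose, left, right)
    print(f"x = {pose[0]:.3f} m, y = {pose[1]:.3f} m, heading = {math.degrees(pose[2]):.1f} deg")
```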
Procedia PDF Downloads 452
2265 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow is associated with an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern while simultaneously applying Murray's law to every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to the clinical data.
Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system
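The sketch below illustrates the Murray's-law constraint applied at each bifurcation when generating the sub-resolution part of the tree; the parent radius, flow split, stopping radius, and recursion depth are assumptions for illustration and not the morphometric data used in the study.

```python
# Murray's law at a bifurcation: r_parent**3 = r1**3 + r2**3, with each daughter's
# cubed radius proportional to the flow it carries. Values are assumed examples.

def murray_daughter_radii(parent_radius, flow_fraction):
    """Return (r1, r2) for a bifurcation obeying Murray's law."""
    r1 = parent_radius * flow_fraction ** (1.0 / 3.0)
    r2 = parent_radius * (1.0 - flow_fraction) ** (1.0 / 3.0)
    return r1, r2

def build_tree(radius, depth, flow_fraction=0.6):
    """Recursively count the segments of an asymmetric tree generated this way."""
    if depth == 0 or radius < 0.004:        # stop near an assumed capillary scale (mm)
        return 1
    r1, r2 = murray_daughter_radii(radius, flow_fraction)
    return 1 + build_tree(r1, depth - 1, flow_fraction) + build_tree(r2, depth - 1, flow_fraction)

r_parent = 1.0                              # assumed parent vessel radius (mm)
print("daughter radii:", murray_daughter_radii(r_parent, 0.6))
print("segments generated:", build_tree(r_parent, depth=6))
```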
Procedia PDF Downloads 272
2264 Neural Reshaping: The Plasticity of Human Brain and Artificial Intelligence in the Learning Process
Authors: Seyed-Ali Sadegh-Zadeh, Mahboobe Bahrami, Sahar Ahmadi, Seyed-Yaser Mousavi, Hamed Atashbar, Amir M. Hajiyavand
Abstract:
This paper presents an investigation into the concept of neural reshaping, which is crucial for achieving strong artificial intelligence through the development of AI algorithms with very high plasticity. By examining the plasticity of both human and artificial neural networks, the study uncovers groundbreaking insights into how these systems adapt to new experiences and situations, ultimately highlighting the potential for creating advanced AI systems that closely mimic human intelligence. The uniqueness of this paper lies in its comprehensive analysis of the neural reshaping process in both human and artificial intelligence systems. This comparative approach enables a deeper understanding of the fundamental principles of neural plasticity, thus shedding light on the limitations and untapped potential of both human and AI learning capabilities. By emphasizing the importance of neural reshaping in the quest for strong AI, the study underscores the need for developing AI algorithms with exceptional adaptability and plasticity. The paper's findings have significant implications for the future of AI research and development. By identifying the core principles of neural reshaping, this research can guide the design of next-generation AI technologies that can enhance human and artificial intelligence alike. These advancements will be instrumental in creating a new era of AI systems with unparalleled capabilities, paving the way for improved decision-making, problem-solving, and overall cognitive performance. In conclusion, this paper makes a substantial contribution by investigating the concept of neural reshaping and its importance for achieving strong AI. Through its in-depth exploration of neural plasticity in both human and artificial neural networks, the study unveils vital insights that can inform the development of innovative AI technologies with high adaptability and potential for enhancing human and AI capabilities alike.
Keywords: neural plasticity, brain adaptation, artificial intelligence, learning, cognitive reshaping
Procedia PDF Downloads 52
2263 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using convolutional neural networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work aims to present a technology that resolves this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product mean distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, i.e., subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
Procedia PDF Downloads 100
2262 The Effect of the Precursor Powder Size on the Electrical and Sensor Characteristics of Fully Stabilized Zirconia-Based Solid Electrolytes
Authors: Olga Yu Kurapova, Alexander V. Shorokhov, Vladimir G. Konakov
Abstract:
Nowadays, due to their exceptional anion conductivity at high temperatures, cubic zirconia solid solutions stabilized by rare-earth and alkaline-earth metal oxides are widely used as solid electrolyte (SE) materials in different electrochemical devices such as gas sensors, oxygen pumps, solid oxide fuel cells (SOFC), etc. Intensive studies are being carried out in the field of novel fully stabilized zirconia-based SE development. The use of precursor powders for SE manufacturing allows the microstructure, electrical, and sensor characteristics of the zirconia-based ceramics used as SEs to be predetermined. Thus, the goal of the present work was to investigate the effect of the precursor powder size on the electrical and sensor characteristics of fully stabilized zirconia-based solid electrolytes with compositions of 0.08Y2O3·0.92ZrO2 (YSZ), 0.06Ce2O3·0.06Y2O3·0.88ZrO2, and 0.09Ce2O3·0.06Y2O3·0.85ZrO2. The synthesis of precursor powders with different mean particle sizes was performed by sol-gel synthesis in the form of reversed co-precipitation from aqueous solutions. The cakes were washed to neutral pH and pan-dried at 110 °C. YSZ ceramics were also obtained by conventional solid-state synthesis including milling in a planetary mill. The powders were cold pressed into pellets with a diameter of 7.2 mm and a thickness of ~4 mm at P ~16 kg/cm2 and then hydrostatically pressed. The pellets were annealed at 1600 °C for 2 hours. The phase composition of the as-synthesized SEs was investigated by X-ray photoelectron spectroscopy (ESCA-5400 spectrometer, PHI) and X-ray diffraction analysis (XRD, Shimadzu XRD-6000). The following galvanic cell, O2 (PO2(1)), Pt | SE | Pt, O2 (PO2(2) = 0.21 atm), was used to investigate the SE sensor properties. The value of PO2(1) was set by mixing O2 and N2 in defined proportions with an accuracy of 5%. The temperature was measured by a Pt/Pt-10% Rh thermocouple. The cell electromotive force (EMF) was measured with ±0.1 mV accuracy. During operation at constant temperature, reproducibility was better than 5 mV. The asymmetric potential measured for all SEs appeared to be negligible. It was shown that the resistivity of YSZ ceramics decreases by about a factor of two as the mean agglomerate size decreases from 200-250 nm to 40 nm, likely due to the decrease of both surface and bulk resistivity in the grains. Thus, the overall decrease of grain size in ceramic SEs results in a significant decrease of the total ceramic resistivity, allowing sensor operation at lower temperatures. For the manufactured SEs, the oxygen ion transfer number tion was estimated in the range 600-800 °C. YSZ ceramics manufactured from powders with a mean particle size of 40-140 nm show the highest values, i.e., 0.97-0.98. SEs manufactured from precursors with a mean particle size of 40-140 nm show better sensor characteristics, i.e., temperature and oxygen concentration EMF dependencies, EMF deviation (ENernst - Ereal), tion, and response time, than ceramics manufactured by conventional solid-state synthesis.
Keywords: oxygen sensors, precursor powders, sol-gel synthesis, stabilized zirconia ceramics
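The sensor characterization above relies on comparing the measured cell EMF with the Nernst value for the oxygen concentration cell; a minimal sketch of that reference calculation is given below, with the test-side oxygen partial pressures chosen as assumed example values.

```python
import math

# Nernst EMF of the oxygen concentration cell O2(P1), Pt | SE | Pt, O2(P2):
# E = (R*T / 4F) * ln(P1 / P2). Example partial pressures are assumed.

R = 8.314       # J/(mol*K)
F = 96485.0     # C/mol

def nernst_emf(p_o2_test, p_o2_ref=0.21, temperature_c=700.0):
    t_kelvin = temperature_c + 273.15
    return (R * t_kelvin) / (4.0 * F) * math.log(p_o2_test / p_o2_ref)

for p in (0.01, 0.05, 0.21, 1.0):
    print(f"P_O2 = {p:5.2f} atm -> EMF = {1000 * nernst_emf(p):7.1f} mV at 700 °C")
```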
Procedia PDF Downloads 282
2261 Industrial Prototype for Hydrogen Separation and Purification: Graphene Based-Materials Application
Authors: Juan Alfredo Guevara Carrio, Swamy Toolahalli Thipperudra, Riddhi Naik Dharmeshbhai, Sergio Graniero Echeverrigaray, Jose Vitorio Emiliano, Antonio Helio Castro
Abstract:
In order to advance the hydrogen economy, several industrial sectors can potentially benefit from the trillions in post-coronavirus stimulus spending. Blending hydrogen into natural gas pipeline networks has been proposed as a means of delivering it during the early market development phase, using separation and purification technologies downstream to extract the pure H₂ close to the point of end use. This first step has been mentioned around the world as an opportunity to use existing infrastructure for immediate decarbonisation pathways. Among the current technologies used to extract hydrogen from mixtures in pipelines or liquid carriers, membrane separation can achieve the highest selectivity. The most efficient approach for the separation of H₂ from other substances by membranes comes from research on 2D layered materials, owing to their exceptional physical and chemical properties. Graphene-based membranes, with their distribution of pore sizes in the nanometer and angstrom range, have shown fundamental and economic advantages over other materials. Their combination with the structure of ceramic and geopolymeric materials enabled the synthesis of nanocomposites and the fabrication of membranes with long-term stability and robustness over a relevant range of physical and chemical conditions. Versatile separation modules have been developed for hydrogen separation, whose adaptability allows their integration in industrial prototypes for applications in heavy transport, steel, and cement production, as well as small installations at end-user stations of pipeline networks. The developed membranes and prototypes are a practical contribution to the technological challenge of supplying pure H₂ for the mentioned industries as well as for hydrogen energy-based fuel cells.
Keywords: graphene nano-composite membranes, hydrogen separation and purification, separation modules, industrial prototype
Procedia PDF Downloads 159
2260 Green Synthesis of Copper Oxide and Cobalt Oxide Nanoparticles Using Spinacia Oleracea Leaf Extract
Authors: Yameen Ahmed, Jamshid Hussain, Farman Ullah, Sohaib Asif
Abstract:
This investigation aims at the synthesis of copper oxide and cobalt oxide nanoparticles using Spinacia oleracea leaf extract. These nanoparticles have many properties and applications. They possess antimicrobial and catalytic properties, and they can also be used in energy storage materials, gas sensors, etc. The Spinacia oleracea leaf extract behaves as a reducing agent in nanoparticle synthesis. The plant extract was first prepared and then treated with copper and cobalt salt solutions to obtain the precipitates. The salt solutions used for this purpose were copper sulfate pentahydrate (CuSO₄.5H₂O) and cobalt chloride hexahydrate (CoCl₂.6H₂O). The UV-Vis, XRD, EDX, and SEM techniques were used to determine the optical, structural, and morphological properties of the copper oxide and cobalt oxide nanoparticles. The UV absorption peaks are at 326 nm and 506 nm for the copper oxide and cobalt oxide nanoparticles, respectively.
Keywords: cobalt oxide, copper oxide, green synthesis, nanoparticles
Procedia PDF Downloads 212
2259 Experimental Field for the Study of Soil-Atmosphere Interaction in Soft Soils
Authors: Andres Mejia-Ortiz, Catalina Lozada, German R. Santos, Rafael Angulo-Jaramillo, Bernardo Caicedo
Abstract:
The interaction between atmospheric variables and soil properties is a determining factor when evaluating the flow of water through the soil. This interaction directly determines the behavior of the soil and greatly influences the changes that occur in it. Atmospheric variations, such as changes in relative humidity, air temperature, wind velocity, and precipitation, are the external variables with the greatest influence on the changes generated in the subsoil as a consequence of downward and upward water flow. These environmental variations are of major importance in the study of the soil because the moisture and temperature conditions at the soil surface depend on them. In addition, these variations control the thickness of the unsaturated zone and the position of the water table with respect to the surface. However, understanding the relationship between the atmosphere and the soil is somewhat complex, mainly because of the difficulty of estimating the changes that occur in the soil from climatic changes, since this is a coupled process involving both mass and heat transfer. In this research, an experimental field was implemented to study in situ the interaction between the atmosphere and the soft soils of the city of Bogota, Colombia. The soil under study consists of a 60 cm layer composed of two silts of similar characteristics at the surface and a deep soft clay deposit located under the silty material. It should be noted that the vegetal layer and organic matter were removed to avoid the evapotranspiration phenomenon. Instrumentation was carried out in situ through a field arrangement of many measuring devices, such as soil moisture sensors, thermocouples, relative humidity sensors, and a wind velocity sensor, among others, which allow the variations of both the atmospheric variables and the soil properties to be registered. With the information collected through field monitoring, water balances were computed using the Hydrus-1D software to determine the flow conditions that developed in the soil during the study. The moisture profile for different periods and time intervals was also determined from the balance supplied by Hydrus-1D; this profile was validated by experimental measurements. As a boundary condition, the actual evaporation rate was included using the semi-empirical equations proposed by different authors. In this study, a downward flow governed by the infiltration capacity of the soil was obtained for the rainy periods. On the other hand, during dry periods, an increase in the actual evaporation of the soil induces an upward flow of water, increasing suction due to the decrease in moisture content. Cracks also developed, accelerating the evaporation process. This work concerns the study of soil-atmosphere interaction through an experimental field, which is a very useful tool since it allows all soil factors and parameters to be considered in their natural state, together with real values of the different environmental conditions.
Keywords: field monitoring, soil-atmosphere, soft soils, soil-water balance
Procedia PDF Downloads 137
2258 Impact of Hard Limited Clipping Crest Factor Reduction Technique on Bit Error Rate in OFDM Based Systems
Authors: Theodore Grosch, Felipe Koji Godinho Hoshino
Abstract:
In wireless communications, 3GPP LTE is one of the solutions to meet the growing demand for higher transmission data rates. One issue inherent to this technology is the high PAPR (peak-to-average power ratio) of OFDM (orthogonal frequency division multiplexing) modulation, which reduces the efficiency of power amplifiers. One approach to mitigate this effect is the crest factor reduction (CFR) technique. In this work, we simulate the impact of the hard limited clipping crest factor reduction technique on BER (bit error rate) in OFDM-based systems. In general, the results showed that CFR has a greater effect on higher-order digital modulation schemes, as expected. More importantly, we show the worst-case degradation due to CFR on QPSK, 16-QAM, and 64-QAM signals in a linear system. For example, hard clipping at 9 dB results in a 2 dB increase in the required signal-to-noise energy at a 1% BER for 64-QAM modulation.
Keywords: bit error rate, crest factor reduction, OFDM, physical layer simulation
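A minimal sketch of the hard limited clipping operation and the PAPR it removes is given below; the subcarrier count, random QPSK loading, and 6 dB clipping threshold are assumed illustration values, not the simulation settings of the paper.

```python
import numpy as np

# Hard limited clipping of OFDM time-domain symbols, with PAPR measured
# before and after. All signal parameters are assumed example values.

rng = np.random.default_rng(0)
n_subcarriers, n_symbols = 1024, 200

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2, axis=-1) / np.mean(np.abs(x) ** 2, axis=-1))

# Random QPSK subcarriers -> OFDM time-domain symbols via IFFT
qpsk = (rng.choice([-1, 1], (n_symbols, n_subcarriers))
        + 1j * rng.choice([-1, 1], (n_symbols, n_subcarriers))) / np.sqrt(2)
tx = np.fft.ifft(qpsk, axis=1)

# Hard limited clipping: cap the envelope at `clip_db` above the RMS level,
# keeping each sample's phase unchanged
clip_db = 6.0
threshold = np.sqrt(np.mean(np.abs(tx) ** 2)) * 10 ** (clip_db / 20)
clipped = np.where(np.abs(tx) > threshold, threshold * np.exp(1j * np.angle(tx)), tx)

print(f"mean PAPR before clipping: {np.mean(papr_db(tx)):.2f} dB")
print(f"mean PAPR after  clipping: {np.mean(papr_db(clipped)):.2f} dB")
```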
Procedia PDF Downloads 366
2257 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
The electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma, and brain death; locating damaged areas of the brain after head injury, stroke, or tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long, acquired with a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex, and exhausting task. Because of this, over the years, several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. One of the differences between all of these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli have been commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we could observe that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing
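To make the notion of "input stimuli" concrete, the hedged sketch below derives two of the five types (an FFT magnitude spectrum and a few morphological descriptors) from a synthetic 200 ms EEG window; the sampling rate, window content, and descriptor set are assumptions for illustration, not the features used in the study.

```python
import numpy as np

# Two input-stimulus types for a 200 ms EEG epoch: FFT spectrum and simple
# morphological descriptors. The synthetic window (a spike-like transient on
# background activity) is an assumed stand-in for a real epoch.

fs = 500                                     # assumed sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)                # one 200 ms window
rng = np.random.default_rng(0)
background = 20 * np.sin(2 * np.pi * 10 * t) + 5 * rng.standard_normal(t.size)
spike = 80 * np.exp(-((t - 0.1) / 0.01) ** 2)
window = background + spike                  # microvolts

# Input type 1: FFT magnitude spectrum (positive frequencies only)
spectrum = np.abs(np.fft.rfft(window * np.hanning(window.size)))
freqs = np.fft.rfftfreq(window.size, 1 / fs)

# Input type 2: a few morphological descriptors of the window
descriptors = {
    "peak_amplitude_uV": float(np.max(np.abs(window))),
    "peak_to_peak_uV": float(np.ptp(window)),
    "mean_abs_slope_uV_per_s": float(np.mean(np.abs(np.diff(window)) * fs)),
    "line_length_uV": float(np.sum(np.abs(np.diff(window)))),
}

print("dominant frequency:", freqs[np.argmax(spectrum)], "Hz")
print(descriptors)
```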
Procedia PDF Downloads 528
2256 Teaching Translation in Brazilian Universities: A Study about the Possible Impacts of Translators’ Comments on the Cyberspace about Translator Education
Authors: Erica Lima
Abstract:
The objective of this paper is to discuss relevant points about teaching translation in Brazilian universities and the possible impacts of blogs and social networks on translator education today. It analyzes the curricula of Brazilian translation courses, contrasting them with information obtained from two social networking groups of great visibility in the area concerning the essential characteristics needed to become a successful professional. Therefore, the research has, as its main corpus, a few undergraduate translation programs’ syllabuses, as well as a few postings on social network groups that specifically share professional opinions regarding the necessity for a translator to obtain a degree in translation in order to practice the profession. To a certain extent, such comments and their corresponding responses lead to the propagation of discourses which influence the ideas that aspiring translators and recent graduates end up having about themselves and their undergraduate courses. The postings also show that many professionals do not have a clear position regarding translator education; while refuting it, they also encourage “free” courses. It is thus observed that cyberspace constitutes, on the one hand, a place where people mobilize in defense of similar ideas. On the other hand, however, it embodies a place of tension and conflict, in view of the fact that there are many participants and, as in any other situation of interlocution, disagreements may arise. As a partial result, the common interest in the valorization of the profession can be mentioned, although there is no consensus on the essential characteristics of a good translator. It was also possible to observe that the set of socially constructed representations in the groups reflects the worldwide situation of translation courses (especially in some European countries and in the United States), which, in the first instance, does not accurately reflect the Brazilian idiosyncrasies of the area.
Keywords: cyberspace, teaching translation, translator education, university
Procedia PDF Downloads 388