Search results for: looping pipe networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3080

1730 The Effectiveness of Logotherapy in Alleviating Social Isolation for Visually Impaired Students

Authors: Mohamed M. Elsherbiny, Ahmed T. Helal Ibrahim

Abstract:

Social isolation is one of the most common problems faced by visually impaired students, especially in new situations. It refers to a lack of interaction with others (students, staff members, and so on) and dissatisfaction with one's social networks; in other words, "a lack of quantity and quality of social contacts". The situation is further complicated by the fact that visually impaired students at Sultan Qaboos University had previously attended special schools for the blind, entirely apart from regular students, so studying alongside regular students for the first time may itself lead to isolation. As the academic advisor for all blind students in the College of Arts and Social Sciences at Sultan Qaboos University, the researcher noted (during regular meetings with them) signs of isolation and received many complaints from staff, which motivated him to try to alleviate the problem. Logotherapy is a therapy widely used in clinical social work to help children and young people facing problems related to a lack of meaning in their lives; its aim is to find meaning in life and to be satisfied with that life. For the visually impaired students in this study, the core meaning lay in providing opportunities to build relationships and friendships with others and helping them to be satisfied with the interactions within their networks. The study aimed to identify whether there is a relationship between the use of logotherapy and the alleviation of social isolation among visually impaired students. This is a quasi-experimental study using a pretest-posttest design with two groups: a control group, which did not receive the therapy, and an experimental group, which did. The Social Isolation Scale (SIS) was used to assess the degree of isolation.
The sample comprised 20 visually impaired students at the College of Arts and Social Sciences, Sultan Qaboos University. The results showed that logotherapy was effective in alleviating the students' isolation.

Keywords: social isolation, logotherapy, visually impaired, disability

Procedia PDF Downloads 361
1729 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy involves understanding another person's cognitive perspective and experiencing that person's emotional state. Deciphering emotional states is conducive to interpreting vicarious pain, since observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation through which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; mu rhythm synchrony is therefore expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or by simulating action, mu rhythms become suppressed (desynchronize), and thus should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses to empathy and psychopathy scales while watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents; each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the 'pain matrix'.
Both physical and social pain activate this network, producing vicarious pain responses during empathic processing. Five single-electrode EEG locations over sensorimotor regions measured electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz, and mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Each participant's mu rhythm data were analyzed via the Fast Fourier Transform (FFT) and multifractal time series analysis.
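As a sketch of the band-power computation described above, the following minimal Python example (illustrative only, not the authors' analysis code; the synthetic 10 Hz tone and the five-second window are assumptions) estimates 8-13 Hz mu-band power from a signal sampled at 200 Hz via the FFT:

```python
import numpy as np

def band_power(signal, fs, f_lo=8.0, f_hi=13.0):
    """Power in the [f_lo, f_hi] Hz band, estimated from a plain FFT
    periodogram (no Welch averaging, for brevity)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(spectrum[mask].sum())

fs = 200                         # sampling rate used in the study (Hz)
t = np.arange(0, 5, 1.0 / fs)    # five seconds of synthetic data
mu = np.sin(2 * np.pi * 10 * t)  # a pure 10 Hz "mu-like" tone
in_band = band_power(mu, fs)
total = band_power(mu, fs, 0.0, fs / 2)
```

Desynchronization would then be read as a drop in `in_band` power during the injury clips relative to baseline.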

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 271
1728 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network

Authors: P. Karthick, K. Mahesh

Abstract:

Video has become an increasingly significant component of everyday digital communication. As content volumes and display resolutions grow, the sheer size of video poses serious obstacles to receiving, distributing, compressing, and displaying high-quality video content. In this paper, we propose an end-to-end deep video compression model that jointly optimizes all video compression components. The method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, and replacing duplicate images with a single reference image by recognizing and detecting minute changes using a generative adversarial network (GAN), with temporal structure recorded by long short-term memory (LSTM). Instead of the complete image, only the small changes generated by the GAN are substituted, which enables frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over each frame, the frames are clustered with K-means, and singular value decomposition (SVD) is applied to every frame for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec, converted back to video format, and the results are compared with the original video. Repeated experiments on several videos of different sizes, durations, frames per second (FPS), and quality levels demonstrate a significant resampling rate: on average, the result deviated by approximately 10% in quality and by more than 50% in size from the original video.
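The per-channel SVD step can be sketched as follows (a minimal illustration, not the authors' implementation; the 64x64 random frame and the rank k=16 are assumed values chosen only to show the dimensionality reduction):

```python
import numpy as np

def low_rank_channel(channel, k):
    """Rank-k SVD approximation of one colour channel (a 2-D array)."""
    U, s, Vt = np.linalg.svd(channel, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
frame = rng.random((64, 64))      # stand-in for one R, G, or B channel
approx = low_rank_channel(frame, k=16)

# Storage for the rank-k factors versus the full channel:
full_vals = frame.size            # 4096 values
rank_vals = 16 * (64 + 64) + 16   # U_k, Vt_k and the singular values
```

Keeping only the top k singular triplets is what "extracting latent factors" amounts to: fewer stored values per channel at the cost of a controlled reconstruction error.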

Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system

Procedia PDF Downloads 170
1727 Forecast Financial Bubbles: Multidimensional Phenomenon

Authors: Zouari Ezzeddine, Ghraieb Ikram

Abstract:

Drawing on academic literature that highlights the limitations of previous studies, this article argues that the prediction of financial bubbles is inherently multidimensional. It proposes a new modeling framework for predicting financial bubbles that links a set of variables across several dimensions, reflecting this multidimensional character, and takes into account the preferences of financial actors. A multicriteria anticipation of the emergence of bubbles in international financial markets can help guard against a possible crisis.

Keywords: classical measures, predictions, financial bubbles, multidimensional, artificial neural networks

Procedia PDF Downloads 555
1726 Effect of Reynolds Number and Concentration of Biopolymer (Gum Arabic) on Drag Reduction of Turbulent Flow in Circular Pipe

Authors: Kamaljit Singh Sokhal, Gangacharyulu Dasoraju, Vijaya Kumar Bulasara

Abstract:

Biopolymers are popular in many areas, such as petrochemicals, the food industry, and agriculture, owing to favorable properties such as environmental friendliness, availability, and low cost. In this study, the biopolymer gum arabic was used to examine its effect on the pressure drop at various concentrations (100 ppm – 300 ppm) and various Reynolds numbers (10000 – 45000). A rheological study was also carried out at the same concentrations to determine the effect of shear rate on shear viscosity. Experiments were performed to find the effect of injecting gum arabic directly near the boundary layer and to investigate its effect on the maximum possible drag reduction. The test section had an inner diameter of 19.50 mm and a length of 3045 mm. The polymer solution was injected at the top of the test section with a peristaltic pump. The concentration of the polymer solution and the Reynolds number were used as parameters for achieving the maximum possible drag reduction. Water was circulated by a centrifugal pump with a maximum speed of 3000 rpm, and the flow rate was measured with a rotameter. Results were validated against Virk's maximum drag reduction asymptote. A maximum drag reduction of 62.15% was observed at the highest gum arabic concentration, 300 ppm. The solution was circulated in a closed loop to determine the effect of polymer degradation over repeated cycles on the drag reduction percentage. Injection of the polymer solution into the boundary layer gave better results than premixed solutions.
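The two quantities above can be sketched numerically (illustrative Python; the pressure-drop values are assumptions chosen only to reproduce the reported 62.15% figure, and the Virk correlation is the standard Fanning-friction-factor form of the asymptote, solved here by simple fixed-point iteration):

```python
import math

def drag_reduction_pct(dp_solvent, dp_polymer):
    """Percentage drag reduction from pressure drops measured at the same flow rate."""
    return 100.0 * (dp_solvent - dp_polymer) / dp_solvent

def virk_friction_factor(re, f0=1e-3, iters=50):
    """Fanning friction factor on Virk's maximum drag reduction asymptote,
    1/sqrt(f) = 19.0 * log10(Re * sqrt(f)) - 32.4, via fixed-point iteration."""
    f = f0
    for _ in range(iters):
        f = (19.0 * math.log10(re * math.sqrt(f)) - 32.4) ** -2
    return f

# Assumed pressure drops (Pa) illustrating the reported 62.15% maximum.
dr = drag_reduction_pct(100.0, 37.85)
f_virk = virk_friction_factor(40000)
```

Measured friction factors falling between the Newtonian line and `f_virk` confirm that the data respect the maximum drag reduction bound.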

Keywords: drag reduction, shear viscosity, gum arabic, injection point

Procedia PDF Downloads 127
1725 Older Consumer’s Willingness to Trust Social Media Advertising: A Case of Australian Social Media Users

Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant

Abstract:

Social media networks have become a hotbed for advertising, owing both to their growing consumer/user base and to the ability of marketers to accurately measure ad exposure and obtain consumer-based insights on such networks. Some 4.8 billion people, 60% of the world's population, now use social media, with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks has grown, the key business strategies for interacting with these potential customers have matured, especially social media advertising. Unlike traditional media outlets, social media advertising is highly interactive and digital-channel specific. Social media advertisements are precisely targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these networks. Using a web-based quantitative survey instrument, participants were recruited via a reputable online panel survey site. Respondents represented social media users from all states and territories within Australia, and completed responses were received from a total of 258 social media users. Respondents covered all core age demographic groupings, including Gen Z/Millennials (18-45 years; 60.5% of respondents) and Gen X/Boomers (46-66+ years; 39.5% of respondents). An adapted ADTRUST scale, a 20-item 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within traditional media, such as broadcast and print media, and, more recently, the Internet as a broader platform.
The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution; the factors were named reliability, usefulness and affect, and willingness to rely on. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each factor had it been measured directly, with the following results recorded: reliability = 4.68/7; usefulness and affect = 4.53/7; willingness to rely on = 3.94/7. Further statistical analysis (independent-samples t-test) examined the difference in factor scores when age (Gen Z/Millennials vs. Gen X/Boomers) was used as the independent, categorical variable. The difference in mean scores was statistically significant (p<0.05) across all three factors for these two core age groupings: (1) reliability, 4.90/7 (Gen Z/Millennials) vs. 4.34/7 (Gen X/Boomers); (2) usefulness and affect, 4.85/7 vs. 4.05/7; and (3) willingness to rely on, 4.53/7 vs. 3.03/7. The results clearly indicate that older social media users trust the quality of information conveyed in social media ads less than younger, more social media-savvy consumers do. This is especially evident for Factor 3 (willingness to rely on), whose underlying variables reflect one's behavioral intent to act on the information conveyed in advertising. These findings can be useful to marketers, advertisers, and brand managers: they highlight a critical need to design 'authentic' advertisements on social media sites to better connect with older users and to foster positive behavioral responses from this large demographic group, whose engagement with social media sites continues to increase year on year.
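A minimal sketch of the independent-samples comparison used above (illustrative Python with invented factor-score data, not the study's dataset; Welch's unequal-variance form of the t statistic is assumed):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's independent-samples t statistic (unequal variances assumed)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Invented factor scores for two age groups (NOT the study's data).
younger = [4.9, 4.8, 5.0, 4.7, 4.95]
older = [4.3, 4.4, 4.2, 4.5, 4.35]
t_stat = welch_t(younger, older)
```

A large positive `t_stat` (compared against the t distribution with Welch-Satterthwaite degrees of freedom) corresponds to the p<0.05 differences the study reports between the age groups.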

Keywords: social media advertising, trust, older consumers, internet studies

Procedia PDF Downloads 12
1724 Subway Stray Current Effects on Gas Pipelines in the City of Tehran

Authors: Mohammad Derakhshani, Saeed Reza Allahkarama, Michael Isakhani-Zakaria, Masoud Samadian, Hojjat Sharifi Rasaey

Abstract:

To investigate the effects of stray current from DC traction systems (the subway) on cathodically protected gas pipelines, the subway and gas network maps of the city of Tehran were superimposed into a comprehensive map. 213 intersections and about 100150 meters of pipeline sections parallel to the railway right of way were identified and specified for field measurements. Potential data were logged for one hour at each test point, and 24-hour potential monitoring was carried out at selected test points as well. Results showed that dynamic stray current from the subway appears as fluctuations superimposed on the pipeline's static potential, visible in the diagrams during night periods. These fluctuations can drive the pipeline potential out of the safe zone and lead to corrosion or overprotection. In this study, a maximum shift of 100 mV in the pipe-to-soil potential was taken as the criterion for the effective presence of dynamic stray current. Potential fluctuations ranging from 100 mV to 3 V were measured at points along the pipelines, exceeding the proposed criterion and warranting investigation. Corrosion rates influenced by stray currents were calculated using coupons. A coupon connected to the pipeline at one location in region 1 of the city of Tehran showed a corrosion rate of 4.2 mpy (with cathodic protection, under the influence of stray currents), about 1.5 times more than the free corrosion rate of 2.6 mpy.
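The coupon corrosion rate quoted above can be computed with the standard ASTM G1 mass-loss formula; this Python sketch uses assumed coupon values (mass loss, exposed area, exposure time, steel density) chosen only to illustrate a ~4.2 mpy result, not the study's measurements:

```python
def corrosion_rate_mpy(mass_loss_g, area_cm2, hours, density_g_cm3):
    """ASTM G1 mass-loss corrosion rate in mils per year:
    mpy = 3.45e6 * W / (A * T * D)."""
    return 3.45e6 * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# Assumed values: 0.069 g lost over a 30-day exposure of a 10 cm^2
# carbon-steel coupon (density 7.87 g/cm^3).
rate = corrosion_rate_mpy(mass_loss_g=0.069, area_cm2=10.0,
                          hours=720.0, density_g_cm3=7.87)
```

Comparing the rate of a coupon wired to the protected pipeline against a freely corroding coupon is what isolates the stray-current contribution.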

Keywords: stray current, DC traction, subway, buried pipelines, cathodic protection

Procedia PDF Downloads 804
1723 Monitoring Synthesis of Biodiesel through Online Density Measurements

Authors: Arnaldo G. de Oliveira, Jr, Matthieu Tubino

Abstract:

The transesterification of triglycerides with alcohols that occurs during biodiesel synthesis causes continuous changes in several physical properties of the reaction mixture, such as refractive index, viscosity, and density. Among them, density is a useful parameter for monitoring the reaction, predicting the composition of the reacting mixture, and verifying the conversion of oil into biodiesel. In this context, a system was constructed to continuously determine changes in the density of a reacting mixture containing soybean oil, methanol, and sodium methoxide (30% w/w solution in methanol), stirred at 620 rpm at room temperature (about 27 °C). A polyethylene pipe network connected to a peristaltic pump collected the mixture and pumped it through a coil fixed on the plate of an analytical balance. The collected mass values were used to trace a curve of system mass against reaction time. The density variation profile versus time clearly shows three steps: 1) the dispersion of methanol in the oil causes a decrease in system mass, due to the lower density of the alcohol, followed by stabilization; 2) the addition of the catalyst (sodium methoxide) causes a larger decrease in mass than the first step because of the conversion of oil into biodiesel; 3) a final stabilization, denoting the end of the reaction. This density variation profile was used to predict the composition of the mixture over time and the reaction rate. Precise knowledge of the duration of the synthesis saves time and resources in a production-scale system, and this kind of monitoring offers attractive features such as continuous measurement without collecting aliquots.
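A minimal sketch of how the logged balance readings could be turned into densities and an end-of-reaction flag (illustrative Python; the coil volume, tolerance, and mass trace are invented values, and this is not the authors' data-processing code):

```python
def density_from_mass(mass_g, coil_volume_ml):
    """Mixture density (g/mL) from the balance reading, given the coil's
    internal volume (assumed known from calibration)."""
    return mass_g / coil_volume_ml

def reaction_end_index(masses, window=5, tol=1e-3):
    """First index from which `window` consecutive readings vary by less
    than `tol` (a simple stabilisation heuristic for the end of reaction)."""
    for i in range(len(masses) - window + 1):
        seg = masses[i:i + window]
        if max(seg) - min(seg) < tol:
            return i
    return None

# Invented mass trace (g): drop after methanol/catalyst addition, then plateau.
trace = [10.0, 9.8, 9.6, 9.5, 9.45, 9.45, 9.45, 9.45, 9.45]
```

The index returned by `reaction_end_index` marks step 3 of the profile, the final stabilisation.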

Keywords: biodiesel, density measurements, online continuous monitoring, synthesis

Procedia PDF Downloads 564
1722 An Analysis of Twitter Use of Slow Food Movement in the Context of Online Activism

Authors: Kubra Sultan Yuzuncuyil, Aytekin İsman, Berkay Bulus

Abstract:

With the development of information and communication technologies, the ways of molding public opinion have changed. In the presence of the Internet, the notion of activism has been endowed with digital codes: activists incorporate the Internet into their campaigns and into the process of creating collective identity, and activist movements have been harnessing new communication technologies for their goals and opposition. Creating and managing activism through the Internet is called online activism. In this vein, the Slow Food Movement, which emerged around a philosophy of defending regional, fair, and sustainable food, has engaged the Internet in its activist campaigns. The movement supports the idea that a new food system allowing strong connections between plate and planet is possible. To make its voice heard, it has utilized social networks and developed particular skills within the framework of online activism. This study analyzes the online activism skills the Slow Food Movement (SFM) has developed and attempts to measure their effectiveness. To achieve this aim, it adopts the model proposed by Sivitandies and Shah and conducts both qualitative and quantitative content analysis of the movement's social network use. The sample was the movement's official profile, analyzed over a three-month period, March-May 2017. It was found that the SFM develops particular techniques that align with the model of Sivitandies and Shah. The most prominent skills in this regard were hyperlink abbreviation and the use of multimedia elements; on the other hand, there were inadequacies in hashtag and interactivity use. The importance of this study is that it highlights and discusses how online activism can be engaged in a social movement. It also reveals the SFM's current online activism skills and their effectiveness, and makes suggestions for enhancing these abilities and strengthening the movement's voice on social networks.

Keywords: slow food movement, Twitter, internet, online activism

Procedia PDF Downloads 261
1721 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time, and once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging (HSI) to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprints, owing to strong non-linear relationships between predictors and predictand across a large sample set, likely due to honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for extracting biochemical information from the hyperspectral data. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) compared with PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey versus multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
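The three figures of merit reported above are related quantities; this small Python sketch (with invented reference and prediction values, not the honey dataset) computes R², RMSE, and RPD, where RPD is taken as the standard deviation of the reference values divided by the RMSE:

```python
import math

def metrics(y_true, y_pred):
    """Return (R^2, RMSE, RPD); RPD = SD of reference values / RMSE."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    rmse = math.sqrt(ss_res / n)
    sd = math.sqrt(ss_tot / (n - 1))
    return 1.0 - ss_res / ss_tot, rmse, sd / rmse

# Invented reference values and model predictions for illustration.
r2, rmse, rpd = metrics([1, 2, 3, 4, 5], [1.1, 1.9, 3.2, 3.8, 5.1])
```

An RPD above roughly 2, as in the 1D-CNN result, is a common rule of thumb for a model usable in quality screening.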

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 119
1720 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source to destination. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the usual arc capacities, each intermediate node has a time weight, the travel time per unit of commodity passing through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the required demand to the destination while the total transmission time stays within the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel time reliability via a single optimized path, the proposed QoS measure addresses the performance of the whole network system. To compute the QoS of a transportation network, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc receives a travel time weight of 0; each intermediate node is replaced by two perfect nodes u and v joined by an arc directed from u to v, and this new arc carries three weights: travel time, capacity, and operation probability. The universal set of state vectors is then recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left.
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing their probabilities. Computational experiments were conducted on a benchmark network with 11 nodes and 21 arcs, using five travel time limitations and five demand requirements to compute the QoS value. For comparison, we also tested the exhaustive complete enumeration method; the results reveal that the proposed algorithm is much more efficient. In summary, a transportation network is analyzed with an extended flow network model in which each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network integrates customer demand, travel time, and the probability of connection, and we present a decomposition algorithm to compute it efficiently. Computational experiments on a prototype network show that the proposed algorithm is superior to complete enumeration.
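To make the QoS definition concrete, the brute-force baseline can be sketched on a toy network (illustrative Python; the two parallel source-to-destination arcs with their capacities, travel times, and availabilities are all invented, and this is the complete-enumeration baseline the paper compares against, not its decomposition algorithm):

```python
from itertools import product

# Hypothetical parallel arcs: (capacity, travel_time, availability).
arcs = [(5, 2, 0.9), (4, 3, 0.8)]

def qos(demand, time_limit):
    """P(total capacity of working arcs meeting the time limit >= demand),
    computed by enumerating every up/down state of the components."""
    total = 0.0
    for state in product([0, 1], repeat=len(arcs)):
        p = 1.0
        cap = 0
        for up, (c, t, pr) in zip(state, arcs):
            p *= pr if up else (1.0 - pr)
            if up and t <= time_limit:
                cap += c
        if cap >= demand:
            total += p
    return total
```

Enumeration doubles its work with every extra component, which is why the paper's decomposition into disjoint reliable subsets is so much faster on the 21-arc benchmark.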

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 205
1719 Increasing Power Transfer Capacity of Distribution Networks Using Direct Current Feeders

Authors: Akim Borbuev, Francisco de León

Abstract:

Economic and population growth in densely populated urban areas introduce major challenges for distribution system operators, planners, and designers. To supply added loads, utilities are frequently forced to invest in new distribution feeders; however, this is becoming increasingly challenging due to space limitations and rising installation costs in urban settings. This paper proposes the conversion of critical alternating current (ac) distribution feeders into direct current (dc) feeders to increase the power transfer capacity by a factor as high as four. Current trends suggest that the return of dc transmission, distribution, and utilization is inevitable. Since a total system-level transformation to dc operation is not possible in a short period of time, given the huge investments needed and utilities' unreadiness, this paper recommends that feeders expected to exceed their limits in the near future be converted to dc. The increase in power transfer capacity is achieved through several key differences between ac and dc power transmission. First, underground cables can be operated at a higher dc voltage than ac voltage for the same dielectric stress in the insulation. Second, cable sheath losses, caused by induced voltages driving circulating currents and potentially as high as the phase conductor losses under ac operation, are absent under dc. Finally, skin and proximity effects in conductors and sheaths do not exist in dc cables. The paper demonstrates that, in addition to the increased power transfer capacity, utilities substituting dc feeders for ac feeders could benefit from significantly lower costs and reduced losses: installing dc feeders is less expensive than installing new ac feeders even when new trenches are not needed. Case studies using the IEEE 342-Node Low Voltage Networked Test System quantify the technical and economic benefits of dc feeders.
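A back-of-the-envelope version of the capacity argument (illustrative Python; the power factor and the 1.3 current-uprating figure are assumptions made for this sketch, not values from the paper) multiplies the voltage gain from running the insulation at the ac peak by an assumed current uprating from the eliminated sheath, skin, and proximity losses:

```python
import math

def dc_to_ac_capacity_ratio(power_factor=0.9, current_uprating=1.3):
    """Per-conductor P_dc / P_ac when the cable runs at the ac peak voltage
    (V_dc = sqrt(2) * V_rms) with an assumed current uprating from the
    sheath, skin, and proximity losses that vanish under dc."""
    return math.sqrt(2) * current_uprating / power_factor

ratio = dc_to_ac_capacity_ratio()
```

Even these conservative assumptions give roughly a doubling per conductor; the paper's factor-of-four figure additionally exploits how the existing three-phase conductors can be reused as dc poles.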

Keywords: DC power systems, distribution feeders, distribution networks, power transfer capacity

Procedia PDF Downloads 112
1718 A Study of High Viscosity Oil-Gas Slug Flow Using Gamma Densitometer

Authors: Y. Baba, A. Archibong-Eso, H. Yeung

Abstract:

Experimental studies of high viscosity oil-gas flows in horizontal pipelines published in the literature indicate that hydrodynamic slug flow is the dominant flow pattern observed. Investigations have shown that hydrodynamic slugging produces strong pressure instabilities that can damage production facilities, making it essential to study the high viscosity slug flow regime and improve the understanding of its dynamics. Most slug flow models used in the petroleum industry for pipeline design, together with their closure relationships, were formulated from observations of low viscosity liquid-gas flows. New experimental investigations and data are therefore required to validate these models; where the models underperform, improving them or building new predictive models and correlations will likewise depend on new experimental datasets and a deeper understanding of the flow dynamics of high viscosity oil-gas flows. In this study, conducted at the flow laboratory of the Oil and Gas Engineering Centre of Cranfield University, slug flow variables such as pressure gradient, mean liquid holdup, slug frequency, and slug length were experimentally investigated and analysed for oil viscosities ranging from 1.0 – 5.5 Pa.s. The study was carried out in a pipe with an inner diameter of 0.076 m; two fast-sampling gamma densitometers and pressure transducers (differential and point) were used to obtain the measurements. Comparison of the measured slug flow parameters with the existing slug flow prediction models in the literature showed disagreement with the high viscosity experimental data, highlighting the importance of building new predictive models and correlations.
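The viscosity range above matters because it pushes the liquid-phase Reynolds number far below the values behind the low-viscosity correlations; a quick Python sketch (the velocities and fluid densities are assumed illustrative figures, and only the 0.076 m bore comes from the study):

```python
def reynolds(density_kg_m3, velocity_m_s, diameter_m, viscosity_pa_s):
    """Pipe Reynolds number, Re = rho * v * D / mu (dimensionless)."""
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

# Assumed 1 m/s superficial velocity in the 0.076 m test pipe.
re_water = reynolds(1000.0, 1.0, 0.076, 0.001)  # low-viscosity reference
re_oil = reynolds(900.0, 1.0, 0.076, 5.5)       # 5.5 Pa.s oil
```

The oil case lands deep in the laminar regime (Re well under 2100) while water at the same velocity is fully turbulent, which is why closure relations tuned on low-viscosity data can fail here.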

Keywords: gamma densitometer, mean liquid holdup, pressure gradient, slug frequency and slug length

Procedia PDF Downloads 314
1717 An Exploration of Cyberspace Security, Strategy for a New Era

Authors: Laxmi R. Kasaraneni

Abstract:

The Internet connects all networks, including the nation's critical infrastructure, which is used extensively not only by a nation's government and military to protect sensitive information and execute missions, but also as the primary infrastructure providing the services behind modern conveniences such as education, potable water, electricity, natural gas, and financial transactions. It has become the central nervous system for the government, the citizens, and industry. When it is attacked, the effects can ripple far and wide, impacting not only citizens' well-being but also the nation's economy, civil infrastructure, and national security. Because these critical services may be targeted by malicious hackers during cyber warfare, it is imperative not only to protect them and mitigate immediate or potential threats, but also to understand the current or potential impacts beyond the IT networks of the organization. The nation's IT infrastructure, now vital for communication, commerce, and control of our physical infrastructure, is highly vulnerable to attack. While existing technologies can address some vulnerabilities, fundamentally new architectures and technologies are needed to address the larger structural insecurities of an infrastructure developed in a more trusting time, when mass cyber attacks were not foreseen. This research is intended to improve the core functions of the Internet and critical-sector information systems by providing a clear path toward a safe, secure, and resilient cyber environment that helps stakeholders at all levels of government and the private sector work together to develop the cybersecurity capabilities that are key to our economy, national security, and public health and safety.
The paper also examines present and future cyber security threats, the capabilities and goals of cyber attackers, a strategic concept and steps to implement cybersecurity for maximum effectiveness, enabling technologies, key strategic assumptions and critical challenges, and the future of cyberspace.

Keywords: critical challenges, critical infrastructure, cyber security, enabling technologies, national security

Procedia PDF Downloads 280
1716 From Homogeneous to Phase Separated UV-Cured Interpenetrating Polymer Networks: Influence of the System Composition on Properties and Microstructure

Authors: Caroline Rocco, Feyza Karasu, Céline Croutxé-Barghorn, Xavier Allonas, Maxime Lecompère, Gérard Riess, Yujing Zhang, Catarina Esteves, Leendert van der Ven, Rolf van Benthem, Gijsbertus de With

Abstract:

Acrylates are widely used in UV-curing technology. Their high reactivity can, however, limit their conversion due to early vitrification. In addition, free radical photopolymerization is known to be sensitive to oxygen inhibition, leading to tacky surfaces. Although epoxides can lead to full polymerization, they are sensitive to humidity and exhibit a low polymerization rate. To overcome the intrinsic limitations of both classes of monomers, Interpenetrating Polymer Networks (IPNs) can be synthesized. They consist of at least two cross-linked polymers which are permanently entangled. They can be achieved under thermal and/or light-induced polymerization in a one- or two-step approach. IPNs can display homogeneous to heterogeneous morphologies with various degrees of phase separation, strongly linked to monomer miscibility as well as synthesis parameters. In this presentation, we synthesize UV-cured methacrylate-epoxide-based IPNs with different chemical compositions in order to get a better understanding of their formation and phase separation. Miscibility before and during the photopolymerization, reaction kinetics, as well as mechanical properties and morphology have been investigated. The key parameters controlling the morphology and the phase separation, namely monomer miscibility and synthesis parameters, have been identified. By monitoring the stiffness changes on the film surface, atomic force acoustic microscopy (AFAM) gave, in conjunction with polymerization kinetic profiles and thermomechanical properties, explanations and corroborated the miscibility predictions. When varying the methacrylate/epoxide ratio, it was possible to move from a miscible and highly interpenetrated IPN to a totally immiscible and phase-separated one.

Keywords: investigation of properties and morphology, kinetics, phase separation, UV-cured IPNs

Procedia PDF Downloads 352
1715 Failure Analysis of Fuel Pressure Supply from an Aircraft Engine

Authors: M. Pilar Valles-gonzalez, Alejandro Gonzalez Meije, Ana Pastor Muro, Maria Garcia-Martinez, Beatriz Gonzalez Caballero

Abstract:

This paper studies a failure case of a fuel pressure supply tube from an aircraft engine. Multiple fracture cases of the fuel pressure control tube from aircraft engines have been reported. The studied set was composed of the mentioned tube, a welded connecting pipe, where the fracture was produced, and a union nut. The fracture occurred in one of the most critical zones of the tube, in a region next to the supporting body of the union nut to the connector. The tube material was X6CrNiTi18-10, an austenitic stainless steel. The chemical composition was determined using an X-ray fluorescence spectrometer (XRF) and combustion equipment. Furthermore, the material was characterized mechanically, by hardness testing, and microstructurally, using a stereomicroscope and an optical microscope. The results confirmed that it is within specifications. To determine the macrofractographic features, a visual examination of the tube fracture surface was carried out under a stereo microscope. The results revealed plastic macrodeformation of the tube, surface damage, and signs of a possible corrosion process. The fracture surface was also inspected by scanning electron microscopy (FE-SEM), equipped with a microanalysis system by X-ray dispersive energy (EDX), to determine the microfractographic features in order to find out the failure mechanism involved in the fracture. Fatigue striations, which are typical of a progressive fracture by a fatigue mechanism, were observed. The origin of the fracture was located at defects on the outer wall of the tube, leading to a final overload fracture.

Keywords: aircraft engine, fatigue, FE-SEM, fractography, fracture, fuel tube, microstructure, stainless steel

Procedia PDF Downloads 136
1714 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks

Authors: Muneeb Ullah, Daishihan, Xiadong Young

Abstract:

Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of CXRs (chest X-rays) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. Initially, a generator model is created that uses digressive channels to generate images of CXR scans for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining the dense connectivity strategy and instance normalization. This approach makes use of their feature extraction capabilities on hazy CXR areas. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. With the use of 4,062 COVID-19 CXR images, the COVID-19 GAN model successfully produces 8,124 synthetic CXR images. The COVID-19 GAN model produces COVID-19 CXR images that outperform DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images possess noticeable haziness, offering a promising approach to address the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than those of the initial dataset and other augmentation techniques. Among these models, the ImageNet-based model exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method is a solution to address overfitting issues in disease identification and can enhance identification accuracy effectively.
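The Fréchet inception distance used above compares Gaussian fits to feature statistics of real and generated image sets. A minimal numpy/scipy sketch of the metric follows; the "features" here are random stand-ins, not actual Inception activations:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet (Wasserstein-2) distance between two Gaussians fitted to
    feature statistics of a real and a generated image set."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):      # sqrtm can leak tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# Toy "features" standing in for Inception activations of one image set
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 8))
mu, cov = feats.mean(axis=0), np.cov(feats, rowvar=False)
```

Identical statistics give a distance of (numerically) zero; shifting the mean raises the score, which is how a worse generator is penalized.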

Keywords: classification, deep learning, medical images, CXR, GAN

Procedia PDF Downloads 67
1713 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs

Authors: Anika Chebrolu

Abstract:

Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight against these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multitargeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated based on its ability to generate multi-target ligands against two distinct proteins, multi-target ligands against three distinct proteins, and multi-target ligands against two distinct binding pockets on the same protein. With each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to design multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
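The generate-score-update interplay described above can be caricatured with a cross-entropy-style feedback loop. Everything in this sketch is a stand-in: a toy fragment vocabulary replaces SMILES generation, a stub function replaces the Graph Attention predictors, and a simple categorical "policy" replaces the Stack-Augmented RNN:

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB, LENGTH, BATCH, ELITE = 4, 4, 64, 8   # toy fragment alphabet / ligand length

def predicted_mean_affinity(seq):
    # Stub for the predictive network: scores against two fake protein
    # targets that both favour fragment 0 (lower = stronger mean binding).
    affinity_a = -np.sum(seq == 0)
    affinity_b = -2.0 * np.sum(seq == 0)
    return (affinity_a + affinity_b) / 2.0

probs = np.full((LENGTH, VOCAB), 1.0 / VOCAB)   # the generator's "policy"
history = []
for _ in range(30):
    batch = np.array([[rng.choice(VOCAB, p=probs[i]) for i in range(LENGTH)]
                      for _ in range(BATCH)])
    scores = np.array([predicted_mean_affinity(s) for s in batch])
    history.append(scores.mean())
    elite = batch[np.argsort(scores)[:ELITE]]    # feedback: keep best binders
    for i in range(LENGTH):                      # nudge policy toward elites
        counts = np.bincount(elite[:, i], minlength=VOCAB) + 0.5
        probs[i] = counts / counts.sum()
```

With each iteration the policy shifts probability mass toward fragments that the predictor scores well, which is the feedback loop the abstract describes, albeit with reinforcement learning proper replaced by elite refitting for brevity.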

Keywords: drug design, multitargeticity, de-novo, reinforcement learning

Procedia PDF Downloads 71
1712 Infrastructure Development – Stages in Development

Authors: Seppo Sirkemaa

Abstract:

Information systems infrastructure is the basis of business systems and processes in the company. It should be a reliable platform for business processes and activities, but it should also have the flexibility to adapt to changing business needs. The development of an infrastructure that is robust, reliable, and flexible is a challenge. Understanding technological capabilities and business needs is a key element in the development of successful information systems infrastructure.

Keywords: development, information technology, networks, technology

Procedia PDF Downloads 98
1711 Bracing Applications for Improving the Earthquake Performance of Reinforced Concrete Structures

Authors: Diyar Yousif Ali

Abstract:

Braced frames, among other structural systems such as shear walls or moment-resisting frames, have been a valuable and effective technique for strengthening structures against seismic loads. Under wind or seismic excitations, diagonal members act as truss web elements that carry tension or compression stresses. This study considers the effect of the bracing diagonal configuration on the base shear and displacement of a building. Two models were created, and nonlinear pushover analysis was implemented. Results show that bracing members enhance the lateral load performance of the Concentric Braced Frame (CBF) considerably. The purpose of this article is to study the nonlinear response of reinforced concrete structures that contain hollow-pipe steel braces as the major structural elements against earthquake loads. A five-storey reinforced concrete structure was selected in this study, and two different reinforced concrete frames were considered. The first system was an un-braced frame, while the second was a braced frame with diagonal bracing. Analytical models of the bare frame and the braced frame were built in SAP 2000. The performances of all structures were evaluated using nonlinear static analyses, from which the base shear and displacements were compared. Results are plotted in diagrams and discussed extensively; the analyses showed that the braced frame was capable of carrying more lateral load and had higher stiffness and lower roof displacement than the bare frame.

Keywords: reinforced concrete structures, pushover analysis, base shear, steel bracing

Procedia PDF Downloads 77
1710 Effects of Earthquake Induced Debris to Pedestrian and Community Street Network Resilience

Authors: Al-Amin, Huanjun Jiang, Anayat Ali

Abstract:

Reinforced concrete frames (RC), especially Ordinary RC frames, are prone to structural failures/collapse during seismic events, leading to a large proportion of debris from the structures, which obstructs adjacent areas, including streets. These blocked areas severely impede post-earthquake resilience. This study uses computational simulation (FEM) to investigate the amount of debris generated by the seismic collapse of an ordinary reinforced concrete moment frame building and its effects on the adjacent pedestrian and road network. A three-story ordinary reinforced concrete frame building, primarily designed for gravity load and earthquake resistance, was selected for analysis. Sixteen different ground motions were applied and scaled up until the total collapse of the tested building to evaluate the failure mode under various seismic events. Four types of collapse direction were identified through the analysis, namely aligned (positive and negative) and skewed (positive and negative), with aligned collapse being more predominant than skewed cases. The amount and distribution of debris around the collapsed building were assessed to investigate the interaction between collapsed buildings and adjacent street networks. An interaction was established between a building that collapsed in an aligned direction and the adjacent pedestrian walkway and narrow street located in an unplanned old city. The FEM model was validated against an existing shaking table test. The presented results can be utilized to simulate the interdependency between the debris generated from the collapse of seismic-prone buildings and the resilience of street networks. These findings provide insights for better disaster planning and resilient infrastructure development in earthquake-prone regions.

Keywords: building collapse, earthquake-induced debris, ORC moment resisting frame, street network

Procedia PDF Downloads 70
1709 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network

Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy

Abstract:

The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space and to trade away the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation which encodes some meaningful information into individual nodes in a network layer affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of input along with the capacity of localist representation for multiple instance selections affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.
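A minimal sketch of the familiarity-driven sampling idea described above, assuming a plain nearest-neighbour instance memory; the function, its parameters, and the confidence formula are illustrative, not the authors' implementation:

```python
import numpy as np

def familiarity_weighted_predict(x, memory_x, memory_y, k_min=1, k_max=15, tau=1.0):
    """Instance-memory classification whose sampling range adapts to input
    familiarity: familiar inputs consult few neighbours (high bias),
    unfamiliar inputs consult a broader set (high variance)."""
    d = np.linalg.norm(memory_x - x, axis=1)
    familiarity = np.exp(-d.min() / tau)        # 1 = seen before, -> 0 = novel
    k = int(round(k_min + (1.0 - familiarity) * (k_max - k_min)))
    nearest = memory_y[np.argsort(d)[:k]]
    votes = np.bincount(nearest)
    label = int(votes.argmax())
    confidence = familiarity * votes.max() / k  # low when novel or conflicted
    return label, confidence
```

Combining familiarity with the vote agreement yields the kind of confidence judgement the abstract mentions: a novel or conflicted input scores low and can be flagged for supervisory input.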

Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence

Procedia PDF Downloads 113
1708 A Numerical Model for Simulation of Blood Flow in Vascular Networks

Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia

Abstract:

An accurate study of blood flow requires an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray’s law. Implementing morphometric data to reconstruct the branching pattern and applying Murray’s law at every vessel bifurcation simultaneously lead to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop; once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
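The Murray's-law splitting and the pressure-drop/flow iteration described above can be sketched in a few lines. This assumes Poiseuille (steady laminar) flow in straight segments; the function names, the asymmetry parameter, and the default blood viscosity are illustrative:

```python
import numpy as np

def murray_daughters(d_parent, asymmetry=1.0):
    """Diameters of two daughter vessels obeying Murray's law
    (d_p^3 = d_1^3 + d_2^3); `asymmetry` is the ratio d_2/d_1."""
    d1 = d_parent / (1.0 + asymmetry ** 3) ** (1.0 / 3.0)
    return d1, asymmetry * d1

def poiseuille_pressure_drop(flow, diameter, length, mu=3.5e-3):
    """Pressure drop (Pa) of steady laminar flow through a straight segment
    (Hagen-Poiseuille): dP = 128 * mu * L * Q / (pi * d^4)."""
    return 128.0 * mu * length * flow / (np.pi * diameter ** 4)
```

In a simulation loop of the kind the abstract describes, the per-order averaged flow would be passed through `poiseuille_pressure_drop` and then corrected until flow and predicted drop agree for each segment.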

Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system

Procedia PDF Downloads 255
1707 Neural Reshaping: The Plasticity of Human Brain and Artificial Intelligence in the Learning Process

Authors: Seyed-Ali Sadegh-Zadeh, Mahboobe Bahrami, Sahar Ahmadi, Seyed-Yaser Mousavi, Hamed Atashbar, Amir M. Hajiyavand

Abstract:

This paper presents an investigation into the concept of neural reshaping, which is crucial for achieving strong artificial intelligence through the development of AI algorithms with very high plasticity. By examining the plasticity of both human and artificial neural networks, the study uncovers groundbreaking insights into how these systems adapt to new experiences and situations, ultimately highlighting the potential for creating advanced AI systems that closely mimic human intelligence. The uniqueness of this paper lies in its comprehensive analysis of the neural reshaping process in both human and artificial intelligence systems. This comparative approach enables a deeper understanding of the fundamental principles of neural plasticity, thus shedding light on the limitations and untapped potential of both human and AI learning capabilities. By emphasizing the importance of neural reshaping in the quest for strong AI, the study underscores the need for developing AI algorithms with exceptional adaptability and plasticity. The paper's findings have significant implications for the future of AI research and development. By identifying the core principles of neural reshaping, this research can guide the design of next-generation AI technologies that can enhance human and artificial intelligence alike. These advancements will be instrumental in creating a new era of AI systems with unparalleled capabilities, paving the way for improved decision-making, problem-solving, and overall cognitive performance. In conclusion, this paper makes a substantial contribution by investigating the concept of neural reshaping and its importance for achieving strong AI. Through its in-depth exploration of neural plasticity in both human and artificial neural networks, the study unveils vital insights that can inform the development of innovative AI technologies with high adaptability and potential for enhancing human and AI capabilities alike.

Keywords: neural plasticity, brain adaptation, artificial intelligence, learning, cognitive reshaping

Procedia PDF Downloads 32
1706 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population’s fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language despite there being one hundred and forty-two known variants so far. All of this presents a limitation for current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is modelled as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system’s current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 87
1705 Evaluation of the Dry Compressive Strength of Refractory Bricks Developed from Local Kaolin

Authors: Olanrewaju Rotimi Bodede, Akinlabi Oyetunji

Abstract:

Modeling the dry compressive strength of sodium-silicate-bonded kaolin refractory bricks was studied. The materials used for this research work included refractory clay obtained from the Ijero-Ekiti kaolin deposit on coordinates 7º 49´N and 5º 5´E and sodium silicate obtained from the open market in Lagos on coordinates 6°27′11″N 3°23′45″E, all in the South Western part of Nigeria. The mineralogical composition of the kaolin clay was determined using an Energy Dispersive X-Ray Fluorescence Spectrometer (ED-XRF). The clay samples were crushed and sieved using the laboratory pulveriser, ball mill, and sieve shaker, respectively, to obtain 100 μm diameter particles. A manual pipe extruder of dimension 30 mm diameter by 43.30 mm height was used to prepare the samples with varying percentage volumes of sodium silicate: 5 %, 7.5 %, 10 %, 12.5 %, 15 %, 17.5 %, 20 % and 22.5 %, while kaolin and water were kept at 50 % and 5 % respectively for the compressive test. The samples were left to dry in the open laboratory atmosphere for 24 hours to remove moisture and were then fired in an electrically powered muffle furnace at the following temperatures: 700ºC, 750ºC, 800ºC, 850ºC, 900ºC, 950ºC, 1000ºC and 1100ºC. A compressive strength test was carried out on the dried samples using a Testometric Universal Testing Machine (TUTM) equipped with a computer and printer; optimum compressive strength of 4.41 kN/mm² was obtained at 12.5 % sodium silicate. The experimental results were modeled with MATLAB and Origin packages using polynomial regression equations that predicted the estimated values for dry compressive strength and were later validated with Pearson’s rank correlation coefficient, giving a very high positive correlation value of 0.97.
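The polynomial-regression-plus-correlation workflow described above can be reproduced in outline with numpy. The strength values below are illustrative placeholders, not the paper's measurements; only the reported optimum near 12.5 % guides their shape:

```python
import numpy as np

# Hypothetical data pairs: % sodium silicate vs. dry compressive strength
binder = np.array([5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5])
strength = np.array([2.1, 3.0, 3.9, 4.41, 4.1, 3.6, 2.9, 2.2])

coeffs = np.polyfit(binder, strength, deg=2)   # quadratic regression model
predicted = np.polyval(coeffs, binder)

# Pearson correlation between measured and model-estimated strength
r = np.corrcoef(strength, predicted)[0, 1]
# Binder level at which the fitted parabola peaks (vertex of the quadratic)
peak = -coeffs[1] / (2 * coeffs[0])
```

A strong positive `r` validates the regression model the way the abstract describes, and the vertex of the fitted quadratic locates the optimum binder content.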

Keywords: dry compressive strength, kaolin, modeling, sodium silicate

Procedia PDF Downloads 441
1704 Industrial Prototype for Hydrogen Separation and Purification: Graphene Based-Materials Application

Authors: Juan Alfredo Guevara Carrio, Swamy Toolahalli Thipperudra, Riddhi Naik Dharmeshbhai, Sergio Graniero Echeverrigaray, Jose Vitorio Emiliano, Antonio Helio Castro

Abstract:

In order to advance the hydrogen economy, several industrial sectors can potentially benefit from the trillions in post-coronavirus stimulus spending. Blending hydrogen into natural gas pipeline networks has been proposed as a means of delivering it during the early market development phase, using separation and purification technologies downstream to extract the pure H₂ close to the point of end-use. This first step has been mentioned around the world as an opportunity to use existing infrastructures for immediate decarbonisation pathways. Among current technologies used to extract hydrogen from mixtures in pipelines or liquid carriers, membrane separation can achieve the highest selectivity. The most efficient approach for the separation of H₂ from other substances by membranes comes from research on 2D layered materials, due to their exceptional physical and chemical properties. Graphene-based membranes, with their distribution of pore sizes in the nanometer and angstrom range, have shown fundamental and economic advantages over other materials. Their combination with the structure of ceramic and geopolymeric materials enabled the synthesis of nanocomposites and the fabrication of membranes with long-term stability and robustness over a relevant range of physical and chemical conditions. Versatile separation modules have been developed for hydrogen separation, whose adaptability allows their integration in industrial prototypes for applications in heavy transport, steel, and cement production, as well as in small installations at end-user stations of pipeline networks. The developed membranes and prototypes are a practical contribution to the technological challenge of supplying pure H₂ for the mentioned industries as well as for hydrogen-energy-based fuel cells.

Keywords: graphene nano-composite membranes, hydrogen separation and purification, separation modules, industrial prototype

Procedia PDF Downloads 145
1703 Effects of Exhaust Gas Emitted by the Fleet on Public Health in the Region of Annaba (Algeria): Ecotoxicological Test on Durum Wheat (Triticum durum Desf.)

Authors: Aouissi Nora, Meksem Leila

Abstract:

This work focused on the study of air pollution generated by the transport sector in the region of Annaba. Our study is based on two parts: the first concerns an epidemiological investigation in the area of Annaba, situated on the east Algerian coast, which deals with the development of the fleet and its impact on public health. To get a more precise idea of the impact of road traffic on public health, we consulted the computing center office of the National Social Insurance Fund. The information we were given by this office refers to the number of reported asthma and heart disease cases after medical examination during the period 2006-2010. The second part was devoted to the study of the toxicity of exhaust gases on some physical and biochemical parameters of durum wheat (Triticum durum Desf.). After germination and the three-leaf stage, the pots were placed in a box of volume 0.096 m³ having an input linked directly to the exhaust pipe of a truck, and an outlet to prevent asphyxiation of the plant. The experiment involved 30 pots: 10 pots were exposed for 5 minutes to exhaust smoke, another 10 were exposed for 15 minutes, and the remaining 10 for 30 minutes. The epidemiological study shows that the levels of pollutants emitted by the fleet are responsible for the increase in respiratory and cardiovascular diseases. As for the biochemical analyses of the vegetation, they clearly show the toxicity of the pollutants emitted in the exhaust gases, with an increase in total protein and proline and stimulation of a detoxification enzyme (catalase).

Keywords: air pollution, toxicity, epidemiology, biochemistry

Procedia PDF Downloads 321
1702 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma, and brain death; locating damaged areas of the brain after head injury, stroke, and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings, at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex, and exhausting task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification.
One of the differences between all of these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced into the network. Five types of input stimuli have commonly been found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and the STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
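Two of the spectral input stimuli compared above, the FFT spectrum and the STFT spectrogram, can be computed from an EEG epoch in a few lines. This sketch uses a synthetic 10 Hz "alpha-like" signal rather than real EEG, and the window and overlap choices are illustrative:

```python
import numpy as np

FS = 256                                   # Hz, a common EEG sampling rate
t = np.arange(0, 2.0, 1.0 / FS)            # one 2-second epoch (512 samples)
epoch = (np.sin(2 * np.pi * 10 * t)        # synthetic 10 Hz component
         + 0.5 * np.random.default_rng(0).normal(size=t.size))

# Stimulus: FFT magnitude spectrum of the whole epoch
spectrum = np.abs(np.fft.rfft(epoch))
freqs = np.fft.rfftfreq(epoch.size, d=1.0 / FS)

# Stimulus: STFT spectrogram -- windowed FFTs, 250 ms Hann windows, 50% overlap
win = int(0.25 * FS)
hop = win // 2
frames = [epoch[s:s + win] * np.hanning(win)
          for s in range(0, epoch.size - win + 1, hop)]
spectrogram = np.abs(np.fft.rfft(np.array(frames), axis=1))

dominant_hz = freqs[spectrum.argmax()]     # peak of the epoch's spectrum
```

The raw signal, the `spectrum` vector, or the flattened `spectrogram` matrix would each be a different "input stimulus" in the sense the abstract uses; wavelet features and morphological descriptors would be computed analogously from the same epoch.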

Keywords: Artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 513
1701 Teaching Translation in Brazilian Universities: A Study about the Possible Impacts of Translators’ Comments on the Cyberspace about Translator Education

Authors: Erica Lima

Abstract:

The objective of this paper is to discuss relevant points about teaching translation in Brazilian universities and the possible impact of blogs and social networks on translator education today. It is intended to analyze the curricula of Brazilian translation courses, contrasting them with information obtained from two social networking groups of great visibility in the area concerning the essential characteristics of a successful professional. The research therefore has, as its main corpus, a few undergraduate translation programs’ syllabuses, as well as a few postings in social network groups that specifically share professional opinions on whether a translator needs a degree in translation to practice the profession. To a certain extent, such comments and their corresponding responses lead to the propagation of discourses that influence the ideas that aspiring translators and recent graduates end up having about themselves and their undergraduate courses. The postings also show that many professionals do not have a clear position regarding translator education: while dismissing it, they also encourage “free” courses. It is thus observed that cyberspace constitutes, on the one hand, a place where people mobilize in defense of similar ideas. On the other hand, it embodies a place of tension and conflict, given that there are many participants and, as in any other situation of interlocution, disagreements may arise. From the postings, aspects related to professionalism were analyzed (including discussions about regulation), as well as questions about the classic dichotomies: theory/practice; art/technique; self-education/academic training. As a partial result, a common interest in the valorization of the profession can be mentioned, although there is no consensus on the essential characteristics of a good translator.
It was also possible to observe that the set of socially constructed representations in the groups reflects characteristics of the global situation of translation courses (especially in some European countries and in the United States), which, in the first instance, does not accurately reflect the Brazilian idiosyncrasies of the area.

Keywords: cyberspace, teaching translation, translator education, university

Procedia PDF Downloads 367