Search results for: algorithmic sonic computation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 687

657 Customizable Sonic EEG Neurofeedback Environment to Train Self-Regulation of Momentary Mental and Emotional State

Authors: Cyril Kaplan, Nikola Jajcay

Abstract:

We developed a purely sonic, music-based, highly customizable EEG neurofeedback environment designed to administer a new neurofeedback training protocol. The training protocol concentrates on improving the ability to switch between several mental states characterized by different levels of arousal, each of them correlated with specific brain wave activity patterns in several specific regions of the neocortex. This paper describes the neurofeedback training environment we developed and its specificities, and it can therefore serve as a manual to guide other neurofeedback users (both researchers and practitioners) interested in our editable open-source program (available for download and use under a CC license). Responses and reactions of the first trainees who used our environment are presented in this article. A combination of qualitative methods (thematic analysis of the trainees' neurophenomenological insights and post-session semi-structured interviews) and quantitative methods (power spectra analysis of the EEG recorded during training) was employed to obtain a multifaceted view of our new training protocol.

Keywords: EEG neurofeedback, mixed methods, self-regulation, switch-between-states training

Procedia PDF Downloads 186
656 Symbolic Computation on Variable-Coefficient Non-Linear Dispersive Wave Equations

Authors: Edris Rawashdeh, I. Abu-Falahah, H. M. Jaradat

Abstract:

The variable-coefficient non-linear dispersive wave equation is investigated with the aid of symbolic computation. By virtue of a newly developed simplified bilinear method, multi-soliton solutions for such an equation have been derived. Effects of the inhomogeneities of media and nonuniformities of boundaries, depicted by the variable coefficients, on the soliton behavior are discussed with the aid of the characteristic curve method and graphical analysis.

Keywords: dispersive wave equations, multiple soliton solution, Hirota Bilinear Method, symbolic computation

Procedia PDF Downloads 424
655 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets

Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar

Abstract:

The study of the primary flow velocity and the self-impinging secondary jet flow mixing is important from both the fundamental research and the application points of view. Real industrial configurations are more complex than the simple shear layers present in idealized numerical thrust-vectoring models due to the presence of combustion, swirl and confinement. Predicting the flow features of self-impinging secondary jets in a supersonic primary flow is complex owing to the fact that a large number of parameters are involved. Earlier studies have highlighted several key features of self-impinging jets, but an extensive characterization in terms of the jet interaction between a supersonic flow and self-impinging secondary sonic jets is still an active research topic. In this paper, numerical studies have been carried out using a validated two-dimensional standard k-omega turbulence model for the design optimization of a thrust vector control (TVC) system using shock-induced self-impinging secondary sonic jets with non-reacting flows. Efforts have been taken to examine the flow features of the TVC system with various secondary jets at different divergent locations and jet impinging angles with the same inlet jet pressure and mass flow ratio. The results from the parametric studies reveal that, in addition to the primary-to-secondary mass flow ratio, the characteristics of the self-impinging secondary jets have a bearing on efficient thrust vectoring. We conclude that self-impinging secondary jet nozzles are better than a single jet nozzle with the same secondary mass flow rate, owing to the fact that fixing the self-impinging secondary jet nozzles at a proper jet angle could facilitate better thrust vectoring for any supersonic aerospace vehicle.

Keywords: fluidic thrust vectoring, rocket steering, supersonic to sonic jet interaction, TVC in aerospace vehicles

Procedia PDF Downloads 558
654 Significance of Archetypal Sounds: Exploring Mystical Practices of Uttarakhand Himalayas

Authors: Vineet Gairola

Abstract:

In many cultures, ethnographers have tried to establish a tight link between music and possession. However, they have rarely informed us about the psychology of the interactions between music and the possessed. Ancient myths and the archetypal find expression through the rituals practiced in Uttarakhand. In Uttarakhand (a part of the Central Himalayan region), an intriguing archetypal healing mechanism takes place. Some people get 'possessed' by a deity and shower blessings onto people gathered for a puja in a temple, where invocation of the deity takes place through two archetypal drumming instruments played together, named dhol-damaun. There is devi-doli (palanquin of the goddess) worship, in which the palanquin is carried on the shoulders of two people and is said to tilt and shake on its own. The archetypal survives effortlessly in the modern mind. The 'oceanic' quality of religious feelings is explored through an oral text, the Dholsagar. The method of ethnography, along with case studies, has been used. A substantial part of the fieldwork was carried out in Rudraprayag, Uttarakhand. The research suggests that the collective unconscious is also sonic in nature, characterized by sounds and rhythms, not only by the symbols and images that Jung suggested.

Keywords: archetypal, music, myth, mysticism, possession, sonic collective unconscious

Procedia PDF Downloads 98
653 A Perspective of Digital Formation in the Solar Community as a Prototype for Finding Sustainable Algorithmic Conditions on Earth

Authors: Kunihisa Kakumoto

Abstract:

"Purpose": Global environmental issues are now being raised on a global scale. By predicting, with algorithms, sprawl phenomena that exceed the limits of nature, we can expect to keep our social life within those limits. The sustainable state of the planet consists in maintaining a balance between the capabilities of nature and the demands of our social life. The amount of water on earth is finite, so sustainability is highly dependent on water capacity. A certain amount of water is stored in forests by planting and green space, and the amount of water can therefore be considered in relation to the green area. CO2 is also absorbed by green plants. "Possible measurements and methods": The concept of the solar community has been introduced in technical papers at many international conferences. The solar community concept is based on data collected from one solar model house. This algorithmic study simulates the amount of water stored by lush green vegetation. In addition, we calculated and compared the CO2 emissions of the solar community and the amount of CO2 reduction from greening. Based on the trial calculation results of these solar communities, we simulate the sustainable state of the earth as an algorithmic trial calculation. We believe that the composition of such solar community groups should also be considered using digital technology as a control technology. "Conclusion": We consider the solar community as a prototype for finding sustainable conditions for the planet. The role of water is very important, as the supply capacity of water is limited; however, the circulation of social life is not constructed according to the mechanisms of nature. The simulation trial calculation is explained using the total water supply volume as an example. In this process, the algorithmic calculation considers the total capacity of the water supply and the population and habitable numbers of the area. Green vegetated land is very important to keep enough water, and green vegetation is also very important to maintain the CO2 balance. A simulation trial calculation is possible from the relationship between the CO2 emissions of the solar community and the amount of CO2 reduction due to greening. In order to find this total balance and the sustainable conditions, the algorithmic simulation calculation takes into account the lush vegetation and the total water supply. Research to find sustainable conditions is done by simulating an algorithmic model of the solar community as a prototype; in this single prototype example, a balance is achieved. The activities of our social life must take place within the permissive limits of natural mechanisms. Of course, we aim for a more ideal balance by utilizing auxiliary digital control technology such as AI.
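
The kind of balance check described above can be sketched in a few lines. The snippet below is a hypothetical illustration only; all parameter names and values are assumptions, not figures from the paper.

```python
# Hypothetical illustration of the water/CO2 balance calculation described in the abstract.
# All parameter names and values are assumptions, not figures from the paper.

def sustainable_population(total_water_supply_m3, green_area_ha,
                           storage_m3_per_ha, demand_m3_per_person):
    """Largest population whose annual water demand stays within supply plus green-area storage."""
    available = total_water_supply_m3 + green_area_ha * storage_m3_per_ha
    return int(available // demand_m3_per_person)

def co2_balance_t(emissions_t, green_area_ha, absorption_t_per_ha):
    """Net CO2 (tonnes/year); a non-positive value means greening offsets the community's emissions."""
    return emissions_t - green_area_ha * absorption_t_per_ha

print(sustainable_population(1_200_000, 300, 2_000, 150))   # example inputs only
print(co2_balance_t(5_000, 300, 10))
```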

Keywords: solar community, sustainability, prototype, algorithmic simulation

Procedia PDF Downloads 33
652 Evaluation of Cast-in-Situ Pile Condition Using Pile Integrity Test

Authors: Mohammad I. Hossain, Omar F. Hamim

Abstract:

This paper presents a case study on the pile integrity test for assessing the integrity of piles as well as their physical dimensions (e.g., cross-sectional area, length), continuity, and the consistency of the pile materials. The recent boom in the socio-economic condition of Bangladesh has given rise to the building of high-rise commercial and residential infrastructure. The advantage of the pile integrity test lies in the fact that it is possible to get an approximate indication of the quality of the sub-structure before commencing the construction of the super-structure. This paper aims at providing a classification of cast-in-situ piles based on characteristic reflectograms obtained using the Sonic Integrity Testing program for the sub-soil condition of Narayanganj, Bangladesh. The piles have been classified as 'Pile Type-1', 'Pile Type-2', 'Pile Type-3', 'Pile Type-4', 'Pile Type-5' or 'Pile Type-6' from visual observations of the reflections of stress waves generated by striking the pile head with a handheld hammer. With respect to construction quality and integrity, the piles have been further classified into three distinct categories, i.e., satisfactory, may be satisfactory, and unsatisfactory.

Keywords: cast-in-situ piles, characteristic reflectograms, pile integrity test, sonic integrity testing program

Procedia PDF Downloads 93
651 Spatial Direct Numerical Simulation of Instability Waves in Hypersonic Boundary Layers

Authors: Jayahar Sivasubramanian

Abstract:

Understanding the laminar-turbulent transition process in hypersonic boundary layers is crucial for designing viable high-speed flight vehicles. The study of transition becomes particularly important in the high-speed regime due to the effect of transition on aerodynamic performance and heat transfer. However, even after many years of research, the transition process in hypersonic boundary layers is still not understood. This lack of understanding of the physics of the transition process is a major impediment to the development of reliable transition prediction methods. Towards this end, spatial Direct Numerical Simulations are conducted to investigate the instability waves generated by a localized disturbance in a hypersonic flat-plate boundary layer. In order to model a natural transition scenario, the boundary layer was forced by a short-duration (localized) pulse through a hole on the surface of the flat plate. The pulse disturbance developed into a three-dimensional instability wave packet which consisted of a wide range of disturbance frequencies and wave numbers. First, the linear development of the wave packet was studied by forcing the flow with a low amplitude (0.001% of the free-stream velocity). The dominant waves within the resulting wave packet were identified as two-dimensional second-mode disturbance waves. Hence the wall-pressure disturbance spectrum exhibited a maximum at the spanwise mode number k = 0. The spectrum broadened in the downstream direction, and the lower-frequency first-mode oblique waves were also identified in the spectrum. However, the peak amplitude remained at k = 0 and shifted to lower frequencies in the downstream direction. In order to investigate the nonlinear transition regime, the flow was forced with a higher-amplitude disturbance (5% of the free-stream velocity). The developing wave packet grows linearly at first before reaching the nonlinear regime. The wall-pressure disturbance spectrum confirmed that the wave packet developed linearly at first. The response of the flow to the high-amplitude pulse disturbance indicated the presence of a fundamental resonance mechanism. Lower-amplitude secondary peaks were also identified in the disturbance wave spectrum at approximately half the frequency of the high-amplitude frequency band, which would be an indication of a sub-harmonic resonance mechanism. The disturbance spectrum indicates, however, that fundamental resonance is much stronger than sub-harmonic resonance.

Keywords: boundary layer, DNS, hypersonic flow, instability waves, wave packet

Procedia PDF Downloads 155
650 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits for Big Data and computational biology, and it illustrates the current state of the art and the future generation of HPC computing with Big Data in biology.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 334
649 Comparative Performance Analysis for Selected Behavioral Learning Systems versus Ant Colony System Performance: Neural Network Approach

Authors: Hassan M. H. Mustafa

Abstract:

This piece of research presents a comparative analytical study that considers two diverse algorithmic computational intelligence approaches, related respectively to neural and non-neural systems. The first algorithmic intelligent approach is concerned with practical results observed from three neural (animal) learning activities, namely Pavlov's and Thorndike's experimental work, besides a mouse's trials while moving inside a figure-of-eight maze to reach an optimal solution to a reconstruction problem. Conversely, the second algorithmic intelligent approach originates from the observed activity results of the non-neural Ant Colony System (ACS), obtained after reaching an optimal solution while solving the Traveling Salesman Problem (TSP). Interestingly, the effect of increasing the number of agents (either neurons or ants) on learning performance is shown to be similar for both systems. Finally, the performance of both intelligent learning paradigms is shown to be in agreement with the learning convergence process of the least mean square (LMS) error algorithm, as applied to the training of some Artificial Neural Network (ANN) models. Accordingly, the adopted ANN modeling is a relevant and realistic tool to investigate observations and analyze performance for both selected computational intelligence (biological behavioral learning) systems.
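
The abstract relates the learning convergence of both systems to the least mean square (LMS) algorithm. For reference, a minimal LMS sketch is given below; the filter length, step size, and synthetic signal are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def lms_train(x, d, n_taps=4, mu=0.01):
    """Adapt filter weights w so that w . x[k] tracks the desired signal d[k]."""
    w = np.zeros(n_taps)
    errors = []
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]  # x[k], x[k-1], ..., x[k-n_taps+1]
        e = d[k] - np.dot(w, xk)            # instantaneous error
        w += mu * e * xk                    # LMS weight update
        errors.append(e ** 2)
    return w, errors

# Example: identify a simple FIR system from its input/output data
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.5, -0.3, 0.2, 0.1], mode="full")[:len(x)]
w, errors = lms_train(x, d)
print(w)   # should approach [0.5, -0.3, 0.2, 0.1] as the squared error converges
```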

Keywords: artificial neural network modeling, animal learning, ant colony system, traveling salesman problem, computational biology

Procedia PDF Downloads 444
648 Parallel Computation of the Covariance-Matrix

Authors: Claude Tadonki

Abstract:

We address the issues related to the computation of the covariance matrix. This matrix is likely to be ill-conditioned following its canonical expression, and consequently raises serious numerical issues. The underlying linear system, which should therefore be solved by means of iterative approaches, becomes computationally challenging. A huge number of iterations is expected in order to reach an acceptable level of convergence, necessary to meet the required accuracy of the computation. In addition, this linear system needs to be solved at each iteration, following the general form of the covariance matrix. Putting it all together, it follows that we need to compute the associated matrix-vector product as fast as possible. This is our purpose in this work, where we consider and discuss skillful formulations of the problem and then propose a parallel implementation of the matrix-vector product involved. Numerical and performance-oriented discussions are provided based on experimental evaluations.
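
Since the abstract centres on evaluating the covariance matrix-vector product as fast as possible, a minimal sketch of one common formulation is given below: with centred data Xc, the product C v can be evaluated as Xc.T (Xc v) / (n - 1) without ever forming C, and the two dense products parallelize naturally (here simply delegated to numpy's threaded BLAS). This is a generic illustration, not the specific parallel implementation of the paper.

```python
import numpy as np

def covariance_matvec(X, v):
    """Compute C @ v where C = cov(X) = Xc.T @ Xc / (n - 1), without forming C.

    X is an (n_samples, n_features) data matrix and v a length-n_features vector.
    The two matrix-vector products are delegated to the (threaded) BLAS backend.
    """
    Xc = X - X.mean(axis=0)            # centre the columns
    n = X.shape[0]
    return Xc.T @ (Xc @ v) / (n - 1)

# Quick check against the explicitly formed covariance matrix
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 200))
v = rng.standard_normal(200)
assert np.allclose(covariance_matvec(X, v), np.cov(X, rowvar=False) @ v)
```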

Keywords: covariance-matrix, multicore, numerical computing, parallel computing

Procedia PDF Downloads 284
647 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives

Authors: Roberto Cabezas H

Abstract:

The current work shows the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research is focused on exploring unconventional roles for machines in subjective creative processes, by delving into the semantics of data and machine intelligence algorithms in hybrid technological, creative contexts to expand epistemic domains through human-machine cooperation. The creative process in the scenic and performing arts is guided mostly by intuition; from that idea, we developed an approach to embed collective intuition in computational creative systems, by joining the properties of Generative Adversarial Networks (GANs) and fuzzy clustering based on a semi-supervised data creation and analysis pipeline. The model makes use of GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods); fuzzy logic clustering is then applied to the data artificially created by the GANs to define narrative transitions built on a membership index. This process allowed for the creation of simple and complex spaces with expressive capabilities, based on position and light intensity as the parameters to guide the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques in the implementation of the aided design system. Machine intelligence tools as proposed in this work are well suited to redefine collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found in GANs and fuzzy logic an ideal tool to develop new computational models based on interaction, learning, emotion and imagination to expand the traditional algorithmic model of computation.
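
The pipeline applies fuzzy clustering to GAN-generated lighting data and drives narrative transitions from the membership index. A minimal fuzzy c-means sketch is shown below; the feature layout (position and intensity columns), the number of clusters, and the fuzzifier value are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Return cluster centres and the (n_samples, n_clusters) membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                       # each row sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                  # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Toy lighting states: columns assumed to be (x position, y position, intensity)
X = np.array([[0.10, 0.20, 0.90], [0.15, 0.25, 0.85],
              [0.80, 0.70, 0.20], [0.85, 0.75, 0.25],
              [0.50, 0.50, 0.55]])
centres, U = fuzzy_c_means(X, n_clusters=2)
print(U.round(2))   # soft memberships; gradual membership changes can drive narrative transitions
```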

Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance

Procedia PDF Downloads 115
646 Noise Barrier Technique as a Way to Improve the Sonic Urban Environment along Existing Roadways Assessment: El-Gish Road Street, Alexandria, Egypt

Authors: Nihal Atif Salim

Abstract:

To improve the quality of life in cities, a variety of interventions are used. Noise is a substantial and important form of pollution that has a negative impact on the urban environment and human health. According to a complaint survey conducted by the EEAA in 2019, it ranks second among environmental contamination complaints. The most significant source of noise in the city is traffic noise. In order to improve the sonic urban environment, many physical techniques are applied; in the local area, noise barriers are considered one of the most appropriate physical techniques along existing traffic routes. Alexandria is Egypt's second-largest city after Cairo. It is located along the Mediterranean Sea, and El-Gish Road is one of the city's main arteries; a high level of traffic noise affects the waterfront promenade that extends along the city. The purpose of this paper is to clarify the design considerations for the most appropriate noise barrier type along the promenade, with the goal of improving the Quality of Life (QOL) and, specifically, the sonic urban environment. The proposed methodology first focuses on how noise affects human perception and the environment, then delves into the various physical noise control approaches, and finally discusses sustainable design decision-making and the importance of incorporating sustainability into design decisions. The case study follows three stages. The first stage involves a site inspection and the use of sound measurement equipment (a noise level meter) to measure the noise level along the promenade at many sites; the findings will be shown on a noise map. The second stage is to inquire about the site users' experience. The third stage is to investigate the various types of noise barriers and their effects on QOL along existing routes in order to select the most appropriate type. The goal of this research is to evaluate a noise barrier design that satisfies environmental and social perceptions while maintaining a balanced approach to the noise issue, in order to improve QOL along existing roadways in the local area.
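
As a complement to the proposed measurements, the first-order physics of a roadside barrier can be sketched numerically. The snippet below uses the standard Maekawa chart approximation for barrier insertion loss and energetic (decibel) summation of sources; the geometry, frequency, and level values are hypothetical, not measurements from El-Gish Road.

```python
import math

def insertion_loss_maekawa(d_source, d_receiver, d_direct, freq_hz, c=343.0):
    """Approximate barrier attenuation (dB) from the Fresnel number N = 2*delta*f/c (Maekawa)."""
    delta = d_source + d_receiver - d_direct        # path-length difference over the barrier top
    N = 2.0 * delta * freq_hz / c
    return 10.0 * math.log10(3.0 + 20.0 * N) if N > -0.05 else 0.0

def combine_levels(levels_db):
    """Energetic summation of several incoherent sources in decibels."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

# Hypothetical example: barrier between roadway and promenade, 500 Hz traffic band
print(insertion_loss_maekawa(d_source=5.2, d_receiver=20.4, d_direct=25.0, freq_hz=500))
print(combine_levels([68.0, 65.0, 62.0]))   # combined level of three traffic lanes
```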

Keywords: noise pollution, sonic urban environment, traffic noise, noise barrier, acoustic sustainability, noise reduction techniques

Procedia PDF Downloads 111
645 An Intrusion Detection Systems Based on K-Means, K-Medoids and Support Vector Clustering Using Ensemble

Authors: A. Mohammadpour, Ebrahim Najafi Kajabad, Ghazale Ipakchi

Abstract:

Presently, computer network security is rising in importance, and many studies have been conducted in this field. With the penetration of internet networks into different fields, many things need to be done to provide secure industrial and non-industrial networks. Firewalls, appropriate Intrusion Detection Systems (IDS), encryption protocols for sending and receiving information, and the use of authentication certificates are among the things that should be considered for system security. The aim of the present study is to use the outcome of several algorithms, which reduces IDS errors, in a way that improves system security and prevents additional overload on the system. Finally, from the obtained results, we can also detect the number and percentage of additional sub-attacks. By running the proposed system, which is based on the use of multi-algorithmic outcomes, and comparing it with the proposed single-algorithm methods, we observed a 78.64% attack detection rate, an improvement of 3.14% over the individual algorithms.
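
The core idea is to combine the outcomes of several clustering-based detectors so that the ensemble flags attacks more reliably than any single algorithm. A minimal sketch of that combination step is shown below, using distance-to-centroid anomaly flags from several k-means runs and a majority vote; the paper's actual ensemble uses k-means, k-medoids and support vector clustering, and the thresholds and synthetic data here are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_anomaly_flags(X, n_clusters, quantile=0.95, seed=0):
    """Flag records whose distance to the nearest centroid exceeds a quantile threshold."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    dist = km.transform(X).min(axis=1)          # distance to the closest centroid
    return dist > np.quantile(dist, quantile)

def ensemble_vote(flag_sets):
    """Majority vote across the individual detectors' binary decisions."""
    votes = np.sum(flag_sets, axis=0)
    return votes > (len(flag_sets) / 2)

rng = np.random.default_rng(2)
normal = rng.normal(0, 1, size=(500, 10))
attacks = rng.normal(6, 1, size=(15, 10))       # synthetic "attack" records, far from normal traffic
X = np.vstack([normal, attacks])

flags = [kmeans_anomaly_flags(X, k, seed=s) for k, s in [(3, 0), (5, 1), (8, 2)]]
print(ensemble_vote(flags).sum(), "records flagged as attacks by the ensemble")
```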

Keywords: intrusion detection systems, clustering, k-means, k-medoids, SV clustering, ensemble

Procedia PDF Downloads 192
644 The Intersection/Union Region Computation for Drosophila Brain Images Using Encoding Schemes Based on Multi-Core CPUs

Authors: Ming-Yang Guo, Cheng-Xian Wu, Wei-Xiang Chen, Chun-Yuan Lin, Yen-Jen Lin, Ann-Shyn Chiang

Abstract:

With more and more Drosophila Driver and Neuron images, finding the similarity relationships among them is an important task for functional inference. A general problem is how to find a Drosophila Driver image that can cover a set of Drosophila Driver/Neuron images. In order to solve this problem, the intersection/union region for a set of images should be computed first; a comparison step is then used to calculate the similarities between that region and other images. In this paper, three encoding schemes, namely Integer, Boolean and Decimal, are proposed to encode each image as a one-dimensional structure. The intersection/union region of these images can then be computed using compare operations, Boolean operators and a lookup-table method. Finally, the comparison step is performed on the union region, and the similarity score can be calculated by the definition of the Tanimoto coefficient. The above region computation methods are also implemented in a multi-core CPU environment with OpenMP. The experimental results show that, in the encoding phase, the Boolean scheme performs best; in the region computation phase, the Decimal scheme performs best when the number of images is large. A speedup ratio of 12 can be achieved with 16 CPUs. This work was supported by the Ministry of Science and Technology under the grant MOST 106-2221-E-182-070.
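
The Boolean variant of the encoding and region computation described above can be sketched compactly: each image is flattened to a one-dimensional Boolean array, the intersection/union regions are element-wise AND/OR reductions, and the similarity is the Tanimoto coefficient. The array shapes below are placeholders, not the resolution of the Drosophila image stacks, and the OpenMP parallelization is not modelled.

```python
import numpy as np

def intersection_union(images):
    """images: iterable of equally shaped boolean arrays (flattened voxel masks)."""
    stack = np.stack([img.ravel() for img in images])
    return np.logical_and.reduce(stack), np.logical_or.reduce(stack)

def tanimoto(a, b):
    """|A AND B| / |A OR B| for two boolean vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Toy 4x4 "images" standing in for driver/neuron masks
rng = np.random.default_rng(3)
imgs = [rng.random((4, 4)) > 0.5 for _ in range(3)]
inter, union = intersection_union(imgs)
print(tanimoto(union, imgs[0].ravel()))   # similarity of the union region to one of the images
```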

Keywords: Drosophila driver image, Drosophila neuron images, intersection/union computation, parallel processing, OpenMP

Procedia PDF Downloads 204
643 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network

Authors: Ziying Wu, Danfeng Yan

Abstract:

Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than to the remote cloud server, to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and propose a joint optimization problem of minimizing the total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.

Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep q-network

Procedia PDF Downloads 78
642 Moral Brand Machines: Towards a Conceptual Framework

Authors: Khaled Ibrahim, Mathew Parackal, Damien Mather, Paul Hansen

Abstract:

The integration between marketing and technology has given brands unprecedented opportunities to reach accurate customer data and the competence to change customers' behaviour. Technology has generated a transformation within brands from traditional branding to algorithmic branding. However, brands have utilised customer data in non-cognitive programmatic targeting. This algorithmic persuasion may be effective in reaching the targeted audience, but it may simultaneously encounter a moral conflict, as it might not consider our social principles. Moral branding is a critical topic, particularly with the increasing interest in commercial settings in teaching machines human morals, e.g., autonomous vehicles and chatbots; however, it is understudied in the marketing literature. Therefore, this paper aims to investigate the recent moral branding literature. Furthermore, applying human-like mind theory as an initial framing, this paper explores a more comprehensive concept involving human morals, machine behaviour, and branding.

Keywords: brand machines, conceptual framework, moral branding, moral machines

Procedia PDF Downloads 137
641 TCTN2 Maintains the Transition Zone Stability and Controls the Entrance of the Ciliary Membrane Protein into Primary Cilia

Authors: Rueyhung Weng, Chia-En Huang, Jung-Chi-Liao

Abstract:

The transition zone (TZ) serves as a diffusion barrier to regulate the ins and outs of the proteins recruited to the primary cilia. TCTN2 is one of the TZ proteins, and its mutation causes Joubert syndrome, a serious multi-organ disease. Despite its important medical relevance, the functions of TCTN2 remain elusive. Here we created a TCTN2-knockout retinal pigment epithelial (RPE1) cell line using the CRISPR/Cas9-based genome editing technique and used this knockout line to reveal the roles of TCTN2. TCTN2-knockout RPE1 cells displayed significantly reduced ciliogenesis or a shortened primary cilium length in the cilium-remaining population. The intraflagellar transport protein IFT88 aberrantly accumulated at the tip of TCTN2-deficient cells. The guanine nucleotide exchange factor Arl13B was mostly absent from the ciliary compartment, with a small population localizing at the ciliary tip. The deficient TZ was corroborated by the mislocalization of two other TZ proteins, TMEM67 and MKS1. In addition, the TZ impairment induced by TCTN2 deficiency led to the suppression of Sonic hedgehog signaling in response to a Smoothened (Smo) agonist. Together, depletion of TCTN2 destabilizes other TZ proteins and considerably alters the localization of key transport- and signaling-associated proteins, including IFT88, Arl13B, and Smo.

Keywords: CRISPR/Cas9, primary cilia, Sonic hedgehog signaling, transition zone

Procedia PDF Downloads 313
640 The Human Process of Trust in Automated Decisions and Algorithmic Explainability as a Fundamental Right in the Exercise of Brazilian Citizenship

Authors: Paloma Mendes Saldanha

Abstract:

Access to information is a prerequisite for democracy while also guiding the material construction of fundamental rights. The exercise of citizenship requires knowing, understanding, questioning, advocating for, and securing rights and responsibilities. In other words, it goes beyond mere active electoral participation and materializes through awareness and the struggle for rights and responsibilities in the various spaces occupied by the population in their daily lives. In times of hyper-cultural connectivity, active citizenship is shaped through ethical trust processes, most often established between humans and algorithms. Automated decisions, so prevalent in various everyday situations, such as purchase preference predictions, virtual voice assistants, reduction of accidents in autonomous vehicles, content removal, resume selection, etc., have already found their place as a normalized discourse that sometimes does not reveal or make clear what violations of fundamental rights may occur when algorithmic explainability is lacking. In other words, technological and market development promotes a normalization for the use of automated decisions while silencing possible restrictions and/or breaches of rights through a culturally modeled, unethical, and unexplained trust process, which hinders the possibility of the right to a healthy, transparent, and complete exercise of citizenship. In this context, the article aims to identify the violations caused by the absence of algorithmic explainability in the exercise of citizenship through the construction of an unethical and silent trust process between humans and algorithms in automated decisions. As a result, it is expected to find violations of constitutionally protected rights such as privacy, data protection, and transparency, as well as the stipulation of algorithmic explainability as a fundamental right in the exercise of Brazilian citizenship in the era of virtualization, facing a threefold foundation called trust: culture, rules, and systems. To do so, the author will use a bibliographic review in the legal and information technology fields, as well as the analysis of legal and official documents, including national documents such as the Brazilian Federal Constitution, as well as international guidelines and resolutions that address the topic in a specific and necessary manner for appropriate regulation based on a sustainable trust process for a hyperconnected world.

Keywords: artificial intelligence, ethics, citizenship, trust

Procedia PDF Downloads 31
639 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem

Authors: Y. Wang

Abstract:

The traveling salesman problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on the complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances tested. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods that resolve the TSP on these sparse graphs will be greatly reduced.
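
The abstract states that the computed sparse graphs retain fewer than about 5n edges while keeping vertex degrees balanced. The sketch below illustrates only that retention step, keeping for every vertex its highest-frequency incident edges; the frequency matrix is a random placeholder here, whereas the paper computes edge frequencies from frequency quadrilaterals over several iterations.

```python
import numpy as np

def sparsify_by_frequency(freq, edges_per_vertex=5):
    """Keep, for each vertex, its `edges_per_vertex` incident edges with the highest frequency.

    freq is a symmetric (n, n) matrix of edge frequencies (placeholder scores in this sketch);
    the returned boolean adjacency matrix marks the retained sparse graph.
    """
    n = freq.shape[0]
    keep = np.zeros((n, n), dtype=bool)
    for i in range(n):
        order = np.argsort(freq[i])[::-1]
        best = [j for j in order if j != i][:edges_per_vertex]
        keep[i, best] = True
    return np.logical_or(keep, keep.T)     # an edge survives if either endpoint keeps it

# Placeholder frequencies for a 10-city instance (random, for illustration only)
rng = np.random.default_rng(4)
F = rng.integers(1, 100, size=(10, 10))
F = np.triu(F, 1) + np.triu(F, 1).T
A = sparsify_by_frequency(F, edges_per_vertex=4)
print(A.sum() // 2, "edges kept out of", 10 * 9 // 2)
```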

Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem

Procedia PDF Downloads 194
638 An Efficient Book Keeping Strategy for the Formation of the Design Matrix in Geodetic Network Adjustment

Authors: O. G. Omogunloye, J. B. Olaleye, O. E. Abiodun, J. O. Odumosu, O. G. Ajayi

Abstract:

The focus of the study is to proffer an easy formulation and computation of the least-squares observation equation's design matrix by using an efficient bookkeeping strategy. Usually, for a large network of many triangles and stations, a rigorous task is involved in computing the differentials of each observation with respect to its station coordinates (latitude and longitude) and placing their values in the respective rows and columns. The efficient bookkeeping strategy seeks to eliminate or reduce this rigorous task, especially in large networks: by a simple, skillful arrangement and the development of a short program written in the Matlab environment, the formulation and computation of the least-squares observation equation's design matrix can be easily achieved.
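
The bookkeeping problem is essentially one of placing each observation's partial derivatives in the correct columns of the design matrix. A minimal sketch of such an indexing scheme is given below (in Python rather than the Matlab used by the authors); the partial-derivative function is a placeholder, since the actual differentials depend on the observation type.

```python
import numpy as np

def build_design_matrix(observations, n_stations, partials):
    """Assemble the least-squares design matrix A.

    observations : list of (from_station, to_station) index pairs, one per observation
    partials     : callable returning (d_obs/d_lat_i, d_obs/d_lon_i, d_obs/d_lat_j, d_obs/d_lon_j);
                   a placeholder for the real differentials of the observation equation
    Each station occupies two adjacent columns: 2*k for latitude, 2*k + 1 for longitude.
    """
    A = np.zeros((len(observations), 2 * n_stations))
    for row, (i, j) in enumerate(observations):
        d_lat_i, d_lon_i, d_lat_j, d_lon_j = partials(i, j)
        A[row, 2 * i], A[row, 2 * i + 1] = d_lat_i, d_lon_i
        A[row, 2 * j], A[row, 2 * j + 1] = d_lat_j, d_lon_j
    return A

# Illustrative use with dummy partial derivatives
dummy = lambda i, j: (1.0, -0.5, -1.0, 0.5)
print(build_design_matrix([(0, 1), (1, 2), (0, 2)], n_stations=3, partials=dummy))
```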

Keywords: design, differential, geodetic, matrix, network, station

Procedia PDF Downloads 314
637 Aerodynamic Coefficients Prediction from Minimum Computation Combinations Using OpenVSP Software

Authors: Marine Segui, Ruxandra Mihaela Botez

Abstract:

OpenVSP is an aerodynamic solver developed by the National Aeronautics and Space Administration (NASA) that allows building a reliable model of an aircraft. This software performs an aerodynamic simulation according to the angle of attack the aircraft makes with the incoming airstream and to its speed. A reliable aerodynamic model of the Cessna Citation X was designed, but it required a lot of computation time. As a consequence, a prediction method was established that allowed predicting lift and drag coefficients for all Mach numbers and for all angles of attack, exclusively for stall conditions, from a computation at three angles of attack and only one Mach number. Aerodynamic coefficients given by the prediction method for the Cessna Citation X model were finally compared with the aerodynamic coefficients obtained using a complete OpenVSP study.

Keywords: aerodynamic, coefficient, cruise, improving, longitudinal, openVSP, solver, time

Procedia PDF Downloads 202
636 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding

Authors: Seongsoo Lee

Abstract:

Motion estimation accounts for the heaviest computation in HEVC (High Efficiency Video Coding). Many fast algorithms, such as TZS (test zone search), have been proposed to reduce this computation. Still, the huge computation load of motion estimation is a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented that exploits early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs, and it can efficiently perform TZS with very high utilization of the PEs.
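
Because the architecture relies on early termination to cut down the work per candidate, the core trick can be illustrated in software: accumulate the SAD row by row and abandon a candidate block as soon as the partial sum exceeds the best SAD found so far. This sketch uses a plain full search rather than the TZS pattern, and the block size and search range are illustrative, not the HEVC configuration of the paper.

```python
import numpy as np

def sad_early_termination(cur, ref, best_so_far):
    """Row-wise SAD accumulation that stops once the running sum exceeds best_so_far."""
    sad = 0
    for r in range(cur.shape[0]):
        sad += np.abs(cur[r].astype(int) - ref[r].astype(int)).sum()
        if sad >= best_so_far:          # early termination: this candidate cannot win
            return None
    return sad

def full_search(cur_block, ref_frame, top, left, search=8):
    best, best_mv = float("inf"), (0, 0)
    h, w = cur_block.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_frame.shape[0] - h and 0 <= x <= ref_frame.shape[1] - w:
                sad = sad_early_termination(cur_block, ref_frame[y:y + h, x:x + w], best)
                if sad is not None and sad < best:
                    best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(5)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = ref[20:36, 20:36].copy()                   # the block truly lives at (20, 20)
print(full_search(cur, ref, top=18, left=22))    # expected motion vector (2, -2), SAD 0
```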

Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization

Procedia PDF Downloads 333
635 Parallel Evaluation of Sommerfeld Integrals for Multilayer Dyadic Green's Function

Authors: Duygu Kan, Mehmet Cayoren

Abstract:

Sommerfeld integrals (SIs) are commonly encountered in electromagnetics problems involving the analysis of antennas and scatterers embedded in planar multilayered media. Generally speaking, the analytical solution of SIs is unavailable, and it is well known that the numerical evaluation of SIs is very time-consuming and computationally expensive due to the highly oscillating and slowly decaying nature of the integrands. Therefore, fast computation of SIs is of paramount importance. In this paper, a parallel code has been developed to speed up the computation of SIs in the framework of calculating the dyadic Green's function in multilayered media. The OpenMP shared-memory approach is used to parallelize the SI algorithm and results in significant time savings. Moreover, accelerating the computation of the dyadic Green's function is discussed based on the parallel SI algorithm developed.

Keywords: Sommerfeld-integrals, multilayer dyadic Green’s function, OpenMP, shared memory parallel programming

Procedia PDF Downloads 213
634 Exploiting Non-Uniform Utility of Computing: A Case Study

Authors: Arnab Sarkar, Michael Huang, Chuang Ren, Jun Li

Abstract:

The increasing importance of computing in modern society has brought substantial growth in the demand for more computational power. In some problem domains, such as scientific simulations, available computational power still sets a limit on what can be practically explored in computation. For many types of code, there is non-uniformity in the utility of computation; that is, not every piece of computation contributes equally to the quality of the result. If this non-uniformity is understood well and exploited effectively, we can utilize available computing power much more effectively. In this paper, we discuss a case study exploring such non-uniformity in a particle-in-cell simulation platform. We find both that significant non-uniformity exists and that it is generally straightforward to exploit. We show the potential of an order-of-magnitude effective performance gain while keeping comparable output quality. We also discuss some challenges in both the practical application of the idea and the evaluation of its impact.

Keywords: approximate computing, landau damping, non uniform utility computing, particle-in-cell

Procedia PDF Downloads 229
633 Compensation Strategies and Their Effects on Employees' Motivation and Organizational Citizenship Behaviour in Some Manufacturing Companies in Lagos, Nigeria

Authors: Ade Oyedijo

Abstract:

This paper reports the findings of a study on the strategic and organizational antecedents and effects of two opposing pay patterns used by some manufacturing companies in Lagos, Nigeria, with particular reference to the behavioural correlates of the pay strategies considered. The assumed relationship between pay strategies and organizational correlates such as business and corporate strategies and firm size was considered problematic in view of their likely implications for employee motivation, citizenship behaviour and firm performance. The survey research method was used for the study. Structured, close-ended questions were used to collect primary data from the respondents. A multipart Likert scale was used to measure the pay orientations of the respondent firms and the job and organizational involvement of the respondent employees. Hierarchical linear regression and t-tests were used to analyze the data obtained from 48 manufacturing companies of various sizes and strategies in order to identify the dominant pattern of employee compensation in the sampled manufacturing companies. The study also revealed that the choice of a pay strategy was strongly influenced by organizational size as well as by the type of business and corporate-level strategies adopted by a firm. Firms pursuing a strategy of related and unrelated diversification are more likely to adopt the algorithmic compensation system than single-product firms because of their relatively larger size and scope. However, firms that pursue competitive advantage through a business-level strategy of cost efficiency are more likely to use the experiential, variable pay strategy. The study found that an algorithmic compensation strategy is as effective as an experiential compensation strategy in the promotion of organizational citizenship behaviour and the motivation of employees.

Keywords: compensation, corporate strategy, business strategy, motivation, citizenship behaviour, algorithmic, experiential, organizational commitment, work environment

Procedia PDF Downloads 360
632 A Parallel Computation Based on GPU Programming for a 3D Compressible Fluid Flow Simulation

Authors: Sugeng Rianto, P.W. Arinto Yudi, Soemarno Muhammad Nurhuda

Abstract:

Computation of a 3D compressible fluid flow for a virtual environment with haptic interaction can be a non-trivial issue, especially with respect to reaching good performance and balancing visualization, tactile feedback interaction, and computation. In this paper, we describe our approach to computation methods based on parallel programming on a GPU. The 3D fluid flow solvers have been developed for smoke dispersion simulation by combining cubic interpolated propagation (CIP) based fluid flow solvers with the parallelism and programmability of the GPU. The fluid flow solver is generated in a GPU-CPU message-passing scheme to allow rapid development of haptic feedback modes for fluid dynamic data. A rapid solution in the fluid flow solvers is obtained by applying the cubic interpolated propagation (CIP) method, from which multiphase fluid flow equations can be solved simultaneously. To further accelerate the computation, the Navier-Stokes Equations (NSEs) are packed into texel channels, where the computation is performed on pixels that can be considered a grid of cells. Therefore, despite the complexity of the obstacle geometry, processing on multiple vertices and pixels can be done simultaneously in parallel. The data are also shared in global memory for the CPU to control the haptics, providing kinaesthetic interaction and feeling. The results show that GPU-based parallel computation approaches provide effective simulation of a compressible fluid flow model for real-time interaction in 3D computer graphics on a PC platform. This report has shown the feasibility of a new approach to solving the compressible fluid flow equations on the GPU. The experimental tests proved that compressible fluid flowing over the few model obstacles with haptic interaction can be effectively and efficiently simulated at a reasonable frame rate with realistic visualization. These results confirm that good performance and a balance between visualization, tactile feedback interaction, and computation can be achieved.

Keywords: CIP, compressible fluid, GPU programming, parallel computation, real-time visualisation

Procedia PDF Downloads 403
631 A CORDIC Based Design Technique for Efficient Computation of DCT

Authors: Deboraj Muchahary, Amlan Deep Borah, Abir J. Mondal, Alak Majumder

Abstract:

A discrete cosine transform (DCT) is described, and a technique to compute it using the fast Fourier transform (FFT) is developed. In this work, the DCT of a finite-length sequence is obtained by incorporating the CORDIC methodology in the radix-2 FFT algorithm. The proposed methodology is simple to comprehend and maintains a regular structure, thereby reducing computational complexity. DCTs are used extensively in the area of digital processing for the purpose of pattern recognition. So the efficient computation of the DCT while maintaining a transparent design flow is highly desirable.
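
Since the key ingredient is replacing the rotation (twiddle-factor) multiplications in the radix-2 FFT with CORDIC iterations, a minimal software model of the CORDIC rotation itself is given below; a hardware design would use fixed-point shifts and adds, and the iteration count here is an arbitrary choice, not the paper's word length.

```python
import math

def cordic_rotate(x, y, theta, iterations=32):
    """Rotate the vector (x, y) by angle theta using shift-and-add CORDIC iterations."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(iterations))
    z = theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i   # only shifts and adds in hardware
        z -= d * a
    return x / gain, y / gain

# cos and sin drop out of rotating the unit vector (1, 0); compare with the math library
c, s = cordic_rotate(1.0, 0.0, math.pi / 5)
print(c, math.cos(math.pi / 5))
print(s, math.sin(math.pi / 5))
```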

Keywords: DCT, DFT, CORDIC, FFT

Procedia PDF Downloads 444
630 Numerical Computation of Specific Absorption Rate and Induced Current for Workers Exposed to Static Magnetic Fields of MRI Scanners

Authors: Sherine Farrag

Abstract:

Currently used MRI scanners in Cairo City possess static magnetic fields (SMFs) that vary from 0.25 T up to 3 T. More than half of them possess an SMF of 1.5 T. The SMF of the magnet determines the diagnostic power of a scanner, but not the worker's exposure profile. This research paper presents an approach for the numerical computation of induced electric fields and SAR values by estimation of the fringe static magnetic fields. The iso-gauss lines of the MR scanner were mapped, and a polynomial function of the 7th degree was generated and tested. The current field induced by worker motion in the SMF and the SAR values for organs and tissues have been calculated. The results illustrate that the computation tool used permits quick, accurate MRI iso-gauss mapping and calculation of SAR values, which can then be used for assessment of the occupational exposure profile of MRI operators.
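
A sketch of the two numerical steps the abstract describes is given below: fitting a 7th-degree polynomial to a mapped fringe field and turning a walking speed through that field gradient into a rough induced electric field and current density estimate. The field values, walking speed, tissue radius, and conductivity used here are stand-in numbers, not the Cairo survey data.

```python
import numpy as np

# Hypothetical iso-gauss map: distance from the magnet isocentre (m) vs. field (T)
z = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
B = np.array([1.20, 0.60, 0.30, 0.15, 0.08, 0.04, 0.02, 0.01, 0.005])

coeffs = np.polyfit(z, B, deg=7)          # 7th-degree polynomial model of the fringe field
fringe = np.poly1d(coeffs)
dBdz = fringe.deriv()

def induced_fields(z_pos, walking_speed=1.0, body_radius=0.15, sigma=0.2):
    """Rough induced E (V/m) and current density J (A/m^2) for motion through the fringe field."""
    dBdt = abs(dBdz(z_pos)) * walking_speed          # temporal field change seen by the moving worker
    E = 0.5 * body_radius * dBdt                     # Faraday's law for a circular loop of tissue
    return E, sigma * E                              # J = sigma * E

print(induced_fields(1.0))
```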

Keywords: MRI occupational exposure, MRI safety, induced current density, specific absorption rate, static magnetic fields

Procedia PDF Downloads 403
629 Computation of Natural Logarithm Using Abstract Chemical Reaction Networks

Authors: Iuliia Zarubiieva, Joyun Tseng, Vishwesh Kulkarni

Abstract:

Recent research has focused on nucleic acids as a substrate for designing biomolecular circuits for in situ monitoring and control. A common approach is to express such circuits by a set of idealised abstract chemical reaction networks (ACRNs). Here, we present new results on how abstract chemical reactions, viz., catalysis, annihilation and degradation, can be used to implement a circuit that accurately computes the logarithm function using the method of the Arithmetic-Geometric Mean (AGM), which has not been previously used in conjunction with ACRNs.
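
For reference, the arithmetic-geometric mean (AGM) method that the circuit implements can be stated in a few lines of ordinary floating-point code: scale the argument up so the asymptotic identity ln(s) ≈ π / (2·AGM(1, 4/s)) applies, then remove the scaling with a known value of ln 2. This is the underlying numerical method only, not a model of the chemical reaction network itself, and the scaling exponent is an illustrative choice.

```python
from math import pi, sqrt, log

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean: iterate the pair of means until they coincide."""
    while abs(a - b) > tol * max(a, b):
        a, b = 0.5 * (a + b), sqrt(a * b)
    return 0.5 * (a + b)

LN2 = 0.6931471805599453

def ln_agm(x, m=40):
    """ln(x) via ln(s) ~ pi / (2 * AGM(1, 4/s)) with s = x * 2**m large, then undo the scaling."""
    s = x * 2.0 ** m
    return pi / (2.0 * agm(1.0, 4.0 / s)) - m * LN2

for x in (0.5, 2.0, 10.0, 1000.0):
    print(x, ln_agm(x), log(x))   # AGM estimate vs. the math library
```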

Keywords: chemical reaction networks, ratio computation, stability, robustness

Procedia PDF Downloads 134
628 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware application for the training of cellular neural networks (CNNs) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage compared to the test process. Since optoelectronic hardware offers the possibility of parallel high-speed processing for 2D data, the CNN training algorithm can be realized using the Fourier optics technique. JTC employs lenses and CCD cameras with a laser beam, which realize 2D matrix multiplication and summation at the speed of light. Therefore, in each iteration of training, JTC inherently carries more of the computation burden, and the rest of the mathematical computation is realized digitally. The bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of JTC are then utilized for the summation of two cross-correlations, which reduces the computation required in the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.
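
The optical step the abstract leans on is the joint transform correlation itself: place reference and target side by side, record the joint power spectrum, and Fourier-transform it again so cross-correlation peaks appear off-axis. A minimal digital emulation is shown below to make the data flow concrete; it ignores the phase-only encoding and the multi-object overlapping used in the actual training setup, and the image sizes are arbitrary.

```python
import numpy as np

def jtc_correlation(ref, target):
    """Digital emulation of a joint transform correlator for two equally sized images."""
    joint = np.concatenate([ref, target], axis=1)   # joint input plane (side-by-side placement)
    F = np.fft.fft2(joint)
    jps = np.abs(F) ** 2                            # joint power spectrum, as a CCD would record it
    corr = np.fft.fftshift(np.fft.ifft2(jps))       # second transform yields the correlation plane
    return np.abs(corr)

rng = np.random.default_rng(6)
ref = rng.random((32, 32))
target = np.roll(ref, shift=3, axis=0)              # same pattern, shifted: strong off-axis peaks
plane = jtc_correlation(ref, target)
print(plane.shape, plane.max())
```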

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 475