Search results for: fast-regional convolutional neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3626

2126 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by extracting features automatically. However, deep networks require large training datasets to extract such features effectively. In this work, we propose an efficient emotion detection algorithm for face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on a different type of coarse feature with fine-grained details, in order to break the symmetry of the produced information. In this way, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further introduce a Dynamic Soft-Margin SoftMax. The conventional SoftMax reaches the gold labels too quickly, which drives the model toward over-fitting, because it cannot produce adequately discriminant feature vectors for some class labels. We reduce the risk of over-fitting by using a dynamic rather than static input tensor shape in the SoftMax layer and by specifying a desired soft margin, which acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss compacts samples of the same class and separates different classes in the normalized log domain: predictions with high divergence from the ground-truth labels are penalized, so correct feature vectors are shortened and falsely predicted tensors are enlarged, assigning more weight to classes that lie close to one another (the "hard labels to learn"). In this way, the model is constrained to generate more discriminative feature vectors across class labels. Finally, the proposed optimizer addresses the weak convergence of the Adam optimizer on non-convex problems. It works by an alternative gradient-updating procedure with an exponentially weighted moving average for faster convergence, and it exploits weight decay to reduce the learning rate sharply near optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the previous first-rank results on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the first rank that had stood for 10 years), 90.73% on RAF-DB, and a 100% k-fold average accuracy on CK+, a top performance compared to that of other networks, which require much larger training datasets.
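
The margin idea in the abstract can be illustrated with a small sketch. The snippet below is a minimal NumPy example of a fixed soft-margin SoftMax cross-entropy, in which the ground-truth logit is reduced by a margin before normalisation; the authors' dynamic, tensor-shape-dependent margin and their network are not public, so the margin value and array shapes here are illustrative assumptions only.

    import numpy as np

    def soft_margin_softmax_loss(logits, labels, margin=0.35):
        """logits: (N, C) class scores; labels: (N,) integer class indices."""
        z = logits.astype(float).copy()
        z[np.arange(len(labels)), labels] -= margin   # make the true class harder to satisfy
        z -= z.max(axis=1, keepdims=True)             # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    rng = np.random.default_rng(0)
    logits = rng.normal(size=(4, 7))                  # 4 samples, 7 emotion classes
    print(soft_margin_softmax_loss(logits, np.array([0, 3, 6, 2])))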

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 58
2125 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis

Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin

Abstract:

Diagnosis of male infertility by laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, including naive Bayesian, neural network, logistic regression, and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; in general, most of the methods performed better after applying the filter. We have shown that fuzzy c-means clustering used as a classifier performs well according to the ROC curves, and its performance is comparable to that of other classification methods such as logistic regression.
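
To make the "clustering as classification" step concrete, the following is a minimal NumPy sketch of fuzzy c-means in which the fuzzy memberships of the two clusters are used as class scores (for example, for an ROC curve); the data, the number of clusters, and the cluster-to-class mapping are illustrative assumptions, not the questionnaire data of the study.

    import numpy as np

    def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
        """Returns cluster centres and the (N, c) fuzzy membership matrix."""
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))
        for _ in range(iters):
            Um = U ** m
            centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)
        return centres, U

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])  # synthetic cohort
    y = np.array([0] * 50 + [1] * 50)                                      # 0 = fertile, 1 = infertile
    centres, U = fuzzy_cmeans(X, c=2)
    assign = U.argmax(axis=1)
    cluster_label = [int(round(y[assign == k].mean())) for k in range(2)]  # majority class per cluster
    scores = U[:, cluster_label.index(1)]   # membership of the 'infertile' cluster as a class score
    print(scores[:5])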

Keywords: classification, fuzzy c-means, logistic regression, Naive Bayesian, neural network, ROC curve

Procedia PDF Downloads 318
2124 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection

Authors: Mahshid Arabi

Abstract:

With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but simultaneously, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, T-Test and ANOVA statistical tests were employed, and results were measured using accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the comprehensive proposed model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.

Keywords: data protection, digital technologies, information security, modern management

Procedia PDF Downloads 10
2123 Gender Recognition with Deep Belief Networks

Authors: Xiaoqi Jia, Qing Zhu, Hao Zhang, Su Yang

Abstract:

A gender recognition system can tell the gender of a given person from a few frontal facial images. An effective gender recognition approach can improve the performance of many other applications, including security monitoring, human-computer interaction, and image or video retrieval. In this paper, we present an effective method for the gender classification task on frontal facial images based on deep belief networks (DBNs), which pre-train the model and improve accuracy slightly. Our experiments show that pre-training with DBNs for the gender classification task is feasible and achieves a modest improvement in accuracy on the FERET and CAS-PEAL-R1 facial datasets.
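
A compact sketch of the greedy layer-wise pre-training mentioned in the abstract is given below: two restricted Boltzmann machines are trained one after the other with one step of contrastive divergence (CD-1), and the hidden activations of the first feed the second. The data, layer sizes, and hyper-parameters are placeholders; the paper's DBN configuration and fine-tuning stage are not reproduced here.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_rbm(V, n_hidden, lr=0.05, epochs=10, seed=0):
        """One RBM layer trained with CD-1; V is an (N, n_visible) matrix in [0, 1]."""
        rng = np.random.default_rng(seed)
        W = 0.01 * rng.standard_normal((V.shape[1], n_hidden))
        b, c = np.zeros(V.shape[1]), np.zeros(n_hidden)
        for _ in range(epochs):
            h_prob = sigmoid(V @ W + c)
            h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
            v_recon = sigmoid(h_samp @ W.T + b)
            h_recon = sigmoid(v_recon @ W + c)
            W += lr * (V.T @ h_prob - v_recon.T @ h_recon) / len(V)
            b += lr * (V - v_recon).mean(axis=0)
            c += lr * (h_prob - h_recon).mean(axis=0)
        return W, sigmoid(V @ W + c)        # weights and hidden activations for the next layer

    rng = np.random.default_rng(1)
    X = (rng.random((200, 256)) > 0.5).astype(float)   # stand-in for binarised face patches
    W1, H1 = train_rbm(X, n_hidden=128)                # greedy layer-wise stacking:
    W2, H2 = train_rbm(H1, n_hidden=64)                # outputs of one RBM feed the next
    print(H2.shape)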

Keywords: gender recognition, deep belief networks, semi-supervised learning, greedy layer-wise RBMs

Procedia PDF Downloads 427
2122 Modeling of Surface Roughness in Hard Turning of DIN 1.2210 Cold Work Tool Steel with Ceramic Tools

Authors: Mehmet Erdi Korkmaz, Mustafa Günay

Abstract:

Nowadays, grinding is frequently replaced with hard turning to reduce set-up time and achieve higher accuracy. This paper focuses on mathematical modeling of the average surface roughness (Ra) in hard turning of AISI L2 grade (DIN 1.2210) cold work tool steel with ceramic tools. The steel was hardened to 60±1 HRC after the heat treatment process. Cutting speed, feed rate, depth of cut, and tool nose radius were chosen as the cutting conditions. Uncoated ceramic cutting tools were used in the machining experiments, which were performed on a CNC lathe according to a Taguchi L27 orthogonal array. Ra values were calculated by averaging three roughness values obtained from three different points of the machined surface. The influences of the cutting conditions on surface roughness were evaluated statistically and experimentally. Analysis of variance (ANOVA) with a 95% confidence level was applied for the statistical analysis of the experimental results. Finally, mathematical models were developed using artificial neural networks (ANN). The ANOVA results show that feed rate is the dominant factor affecting surface roughness, followed by tool nose radius and cutting speed.

Keywords: ANN, hard turning, DIN 1.2210, surface roughness, Taguchi method

Procedia PDF Downloads 351
2121 Survey on Energy Efficient Routing Protocols in Mobile Ad-Hoc Networks

Authors: Swapnil Singh, Sanjoy Das

Abstract:

A mobile ad-hoc network (MANET) is an infrastructure-less network dynamically formed by an autonomous system of mobile nodes connected via wireless links. Mobile nodes communicate with each other on the fly, and each node also acts as a router. Battery power and bandwidth are very scarce resources in this network, and the network lifetime and the connectivity of nodes depend on battery power. Therefore, energy is a valuable constraint which should be used efficiently. In this paper, we survey various energy-efficient routing protocols, classified on the basis of the approaches they use to minimize energy consumption. The purpose of this paper is to facilitate further research by consolidating existing solutions, so that more energy-efficient routing mechanisms can be developed.

Keywords: Delaunay triangulation, deployment, energy efficiency, MANET

Procedia PDF Downloads 590
2120 Multiple Medical Landmark Detection on X-Ray Scan Using Reinforcement Learning

Authors: Vijaya Yuvaram Singh V M, Kameshwar Rao J V

Abstract:

A challenge in developing neural network based methods for the medical domain is the availability of data. Anatomical landmark detection is the task of locating specific points on a patient's x-ray scan. Most of the time this task is done manually by trained professionals, as it requires precision and domain knowledge. Traditionally, object detection based methods are used for landmark detection. Here, we use reinforcement learning and a query-based method to train a single agent capable of detecting multiple landmarks. A deep Q-network agent is trained to detect single and multiple landmarks on the hip and shoulder from a patient's x-ray scan. Training a single agent to find multiple landmarks makes the approach superior to having an individual agent per landmark. For this initial study, five images of different patients were used as the environment, and the agent's performance was tested on two unseen images.

Keywords: reinforcement learning, medical landmark detection, multi target detection, deep neural network

Procedia PDF Downloads 126
2119 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform

Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung

Abstract:

Functional near-infrared spectroscopy (fNIRS) holds a favorable position among non-invasive brain imaging techniques. The concentration changes of oxygenated and de-oxygenated hemoglobin during a particular cognitive activity are the basis of this neuro-imaging modality. Two wavelengths of near-infrared light can be used with the modified Beer-Lambert law to infer the status of neuronal activity inside the brain indirectly. The temporal resolution of fNIRS is very good for real-time brain-computer interface applications, and its portability, low cost, and acceptable temporal resolution place it in a favorable position among neuro-imaging modalities. In this study, an optimization model for the impulse response function is used to estimate and predict the initial dip from fNIRS data. In addition, the activity strength parameter related to a motor-based cognitive task is analyzed. We found an initial dip that lasts around 200-300 milliseconds and better localizes neural activity.
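
For readers unfamiliar with the modified Beer-Lambert step, the sketch below converts optical-density changes at two wavelengths into oxy- and deoxy-hemoglobin concentration changes by solving a 2x2 linear system. The extinction coefficients, source-detector distance, and differential pathlength factors are placeholder values, not the ones used in the study.

    import numpy as np

    # Placeholder extinction coefficients [1/(mM*cm)] at 760 nm and 850 nm (rows),
    # for HbO and HbR (columns); a real analysis uses tabulated values.
    eps = np.array([[1.5, 3.8],
                    [2.5, 1.8]])
    distance = 3.0                    # source-detector separation in cm (assumed)
    dpf = np.array([6.0, 6.0])        # differential pathlength factors (assumed)

    def hb_changes(delta_od):
        """delta_od: optical-density changes at the two wavelengths -> (dHbO, dHbR) in mM."""
        A = eps * (distance * dpf)[:, None]
        return np.linalg.solve(A, delta_od)

    print(hb_changes(np.array([0.01, 0.02])))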

Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing

Procedia PDF Downloads 208
2118 Incorporation of Growth Factors onto Hydrogels via Peptide Mediated Binding for Development of Vascular Networks

Authors: Katie Kilgour, Brendan Turner, Carly Catella, Michael Daniele, Stefano Menegatti

Abstract:

In vivo, the extracellular matrix (ECM) provides biochemical and mechanical properties that are instructional to resident cells to form complex tissues with characteristics to develop and support vascular networks. In vitro, the development of vascular networks can be guided by biochemical patterning of substrates via spatial distribution and display of peptides and growth factors to prompt cell adhesion, differentiation, and proliferation. We have developed a technique utilizing peptide ligands that specifically bind vascular endothelial growth factor (VEGF), erythropoietin (EPO), or angiopoietin-1 (ANG1) to spatiotemporally distribute growth factors to cells. This allows for the controlled release of each growth factor, ultimately enhancing the formation of a vascular network. Our engineered tissue constructs (ETCs) are fabricated out of gelatin methacryloyl (GelMA), which is an ideal substrate for tailored stiffness and bio-functionality, and covalently patterned with growth factor specific peptides. These peptides mimic growth factor receptors, facilitating the non-covalent binding of the growth factors to the ETC, allowing for facile uptake by the cells. We have demonstrated in the absence of cells the binding affinity of VEGF, EPO, and ANG1 to their respective peptides and the ability for each to be patterned onto a GelMA substrate. The ability to organize growth factors on an ETC provides different functionality to develop organized vascular networks. Our results demonstrated a method to incorporate biochemical cues into ETCs that enable spatial and temporal control of growth factors. Future efforts will investigate the cellular response by evaluating gene expression, quantifying angiogenic activity, and measuring the speed of growth factor consumption.

Keywords: growth factor, hydrogel, peptide, angiogenesis, vascular, patterning

Procedia PDF Downloads 135
2117 Reliable and Energy-Aware Data Forwarding under Sink-Hole Attack in Wireless Sensor Networks

Authors: Ebrahim Alrashed

Abstract:

Wireless sensor networks are vulnerable to attacks from adversaries attempting to disrupt their operations. Sink-hole attacks are a type of attack in which an adversary node drops the data forwarded through it, thereby affecting the reliability and accuracy of the network. Since sensor nodes have limited battery power, it is essential that any solution to the sink-hole attack problem be very energy-aware. In this paper, we present a reliable and energy-efficient scheme to forward data from source nodes to the base station while under sink-hole attack. The scheme also detects sink-hole attack nodes and avoids paths that include them.

Keywords: energy-aware routing, reliability, sink-hole attack, WSN

Procedia PDF Downloads 375
2116 An Approach of Node Model TCnNet: Trellis Coded Nanonetworks on Graphene Composite Substrate

Authors: Diogo Ferreira Lima Filho, José Roberto Amazonas

Abstract:

Nanotechnology opens the door to new paradigms and introduces a variety of novel tools enabling a plethora of potential applications in the biomedical, industrial, environmental, and military fields. This work proposes an integrated node model by applying the concepts of TCNet to networks of nanodevices, where the nodes are cooperatively interconnected with a low-complexity Mealy machine (MM) topology. The model integrates in the same electronic system the modules necessary for independent operation in wireless sensor networks (WSNs): rectennas (RF-to-DC power converters), code generators based on a finite state machine (FSM), a trellis decoder, and on-chip transmit/receive, with autonomy in terms of energy sources through energy harvesting. The approach considers a graphene composite substrate (GCS) for the integrated electronic circuits, which offers mechanical flexibility, miniaturization, and optical transparency, besides being ecological. In addition, graphene consists of a layer of carbon atoms arranged in a honeycomb crystal lattice, which has attracted the attention of the scientific community due to its unique electrical characteristics.

Keywords: composite substrate, energy harvesting, finite state machine, graphene, nanotechnology, rectennas, wireless sensor networks

Procedia PDF Downloads 86
2115 Microwave-Assisted Chemical Pre-Treatment of Waste Sorghum Leaves: Process Optimization and Development of an Intelligent Model for Determination of Volatile Compound Fractions

Authors: Daneal Rorke, Gueguim Kana

Abstract:

The shift towards renewable energy sources for biofuel production has received increasing attention. However, the use and pre-treatment of lignocellulosic material is hampered by the generation of fermentation inhibitors, which severely impact the feasibility of bioprocesses. This study reports the profiling of all volatile compounds generated during microwave-assisted chemical pre-treatment of sorghum leaves. Furthermore, the optimization of reducing sugar (RS) yield from microwave-assisted acid pre-treatment of sorghum leaves was assessed and gave a coefficient of determination (R2) of 0.76, producing an optimal RS yield of 2.74 g FS/g substrate. An intelligent model developed to predict volatile compound fractions gave R2 values of up to 0.93 for 21 volatile compounds. Sensitivity analysis revealed that furfural and phenol were highly sensitive to acid concentration, alkali concentration, and S:L ratio, while phenol was also highly sensitive to microwave duration and intensity. These findings illustrate the potential of using an intelligent model to predict the profile of volatile compounds generated during pre-treatment of sorghum leaves, in order to establish a more robust and efficient pre-treatment regime for biofuel production.

Keywords: artificial neural networks, fermentation inhibitors, lignocellulosic pre-treatment, sorghum leaves

Procedia PDF Downloads 223
2114 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back-propagation networks. The results revealed that the GDM optimisation algorithm, with its adaptive learning capability, generally used a relatively shorter time in both the training and validation phases than the LM and Br algorithms, although learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions for 1-day and 5-day ahead forecasts. In specific statistical terms, the average model performance efficiency measured by the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, the adoption of ANN for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires computation of the Hessian matrix, protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
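
The coefficient of efficiency (CE) quoted above is commonly computed as the Nash-Sutcliffe efficiency; a minimal sketch is shown below, with invented observed and simulated flows standing in for the streamflow series of the study.

    import numpy as np

    def coefficient_of_efficiency(observed, simulated):
        """Nash-Sutcliffe coefficient of efficiency: 1 is a perfect fit, 0 is no better than the mean."""
        obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    observed_flow = [12.0, 30.5, 22.1, 18.7, 25.4]     # invented streamflow values
    simulated_flow = [13.2, 28.9, 23.0, 17.5, 26.1]
    print(coefficient_of_efficiency(observed_flow, simulated_flow))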

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 135
2113 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment to increase the survival probability of the patient. Computer-aided detection systems (CADs), along with modern data techniques such as machine learning (ML) and neural networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing the false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in the early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design that extracts statistical features of the regions of interest. The results of this study prove the potential of this tool for further diagnostics and classification of mammographic images, owing to its low sensitivity to noisy images and low-contrast mammograms.
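
As a rough illustration of the circle Hough transform step (not the authors' pipeline), the OpenCV sketch below detects small bright discs in a synthetic image standing in for a mammogram patch; all parameter values are assumptions chosen only so the toy example runs.

    import cv2
    import numpy as np

    # Synthetic stand-in for a mammogram patch: dark background with small bright discs.
    img = np.zeros((256, 256), dtype=np.uint8)
    for x, y, r in [(60, 80, 6), (150, 40, 5), (200, 180, 7)]:
        cv2.circle(img, (x, y), r, 255, -1)
    img = cv2.GaussianBlur(img, (5, 5), 0)

    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=80, param2=10, minRadius=3, maxRadius=15)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            print(f"candidate bright spot at ({x}, {y}), radius {r} px")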

Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis

Procedia PDF Downloads 60
2112 Analyzing Keyword Networks for the Identification of Correlated Research Topics

Authors: Thiago M. R. Dias, Patrícia M. Dias, Gray F. Moita

Abstract:

The production and publication of scientific works have increased significantly in recent years, with the Internet being the main channel of access and distribution of these works. In view of this, there is a growing interest in understanding how scientific research has evolved, in order to use this knowledge to encourage research groups to become more productive. The objective of this work is therefore to explore repositories containing data on scientific publications and to characterize the keyword networks of these publications, in order to identify the most relevant keywords and to highlight those that have the greatest impact on the network. To do this, the keywords of each article in the study repository are extracted to characterize the network, after which several social network analysis metrics are applied to identify the highlighted keywords.
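
A minimal sketch of the keyword-network characterization is shown below using NetworkX: keywords that co-occur in a paper are linked, and centrality metrics highlight the most influential keywords. The paper lists and the choice of metrics are illustrative assumptions, not the repository analysed in the work.

    import itertools
    import networkx as nx

    papers = [                                        # invented keyword lists
        ["neural networks", "classification", "remote sensing"],
        ["neural networks", "classification", "emotion recognition"],
        ["remote sensing", "support vector machines", "classification"],
    ]

    G = nx.Graph()
    for keywords in papers:
        for a, b in itertools.combinations(sorted(set(keywords)), 2):
            weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=weight)           # co-occurrence count as edge weight

    degree = nx.degree_centrality(G)
    betweenness = nx.betweenness_centrality(G)
    for kw in sorted(G, key=betweenness.get, reverse=True):
        print(f"{kw:25s} degree={degree[kw]:.2f} betweenness={betweenness[kw]:.2f}")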

Keywords: bibliometrics, data analysis, extraction and data integration, scientometrics

Procedia PDF Downloads 227
2111 Data Recording for Remote Monitoring of Autonomous Vehicles

Authors: Rong-Terng Juang

Abstract:

Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars might not appear in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks for remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space. Upon arrival, for controller area networks (CAN), the new data are mapped onto a time-data two-dimensional space associated with the specific CAN identity. Secondly, the data are sampled based on differential sampling. Finally, the whole set of data is encoded using existing algorithms such as Huffman, arithmetic, and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
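
The differential-sampling idea can be illustrated with a short sketch: consecutive samples of a CAN signal are replaced by their differences before entropy coding, which makes the stream far more compressible. zlib is used here only as a stand-in for the Huffman, arithmetic, or codebook coders named in the abstract, and the signal values are invented.

    import struct
    import zlib

    # Invented 16-bit samples of one CAN signal (a slowly varying quantity).
    samples = [5000 + i // 3 for i in range(200)]

    raw = struct.pack(f">{len(samples)}H", *samples)
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    delta_stream = struct.pack(f">{len(deltas)}h", *deltas)

    # zlib stands in for the Huffman / arithmetic / codebook entropy coder.
    print(len(raw), len(zlib.compress(raw)), len(zlib.compress(delta_stream)))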

Keywords: autonomous vehicle, data compression, remote monitoring, controller area networks (CAN), Lidar

Procedia PDF Downloads 142
2110 A Medical Resource Forecasting Model for Emergency Room Patients with Acute Hepatitis

Authors: R. J. Kuo, W. C. Cheng, W. C. Lien, T. J. Yang

Abstract:

Taiwan is a hyperendemic area for the hepatitis B virus (HBV). The estimated total number of HBsAg carriers in the general population over 20 years of age is more than 3 million. Therefore, a case record review was conducted from January 2003 to June 2007 for all patients with a diagnosis of acute hepatitis who were admitted to the Emergency Department (ED) of a well-known teaching hospital. The cost of medical resource use is defined as the total medical fee. In this study, principal component analysis (PCA) is first employed to reduce the number of dimensions. Support vector regression (SVR) and an artificial neural network (ANN) are then used to develop the forecasting model. A total of 117 patients met the inclusion criteria, 61% of whom had hepatitis B-related disease. The computational results show that the proposed PCA-SVR model outperforms the other compared algorithms. In conclusion, the Child-Pugh score and echogram can both be used to predict the cost of medical resources for patients with acute hepatitis in the ED.
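
A minimal sketch of a PCA-SVR pipeline of the kind described above is shown below with scikit-learn; the synthetic features, the number of retained components, and the SVR settings are assumptions, not the clinical variables or tuning of the study.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(117, 12))                    # 117 visits, 12 synthetic clinical variables
    y = 2000 + 300 * X[:, 0] - 150 * X[:, 3] + rng.normal(scale=50, size=117)  # synthetic total fee

    model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR(kernel="rbf", C=100))
    print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())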

Keywords: acute hepatitis, medical resource cost, artificial neural network, support vector regression

Procedia PDF Downloads 408
2109 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa

Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam

Abstract:

Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects. Such aspects include ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has largely remained elusive due to the complexity of the species structure and their distribution. Therefore, the objective of the current study was to examine the utility of the advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. The study offers relatively accurate information that is important for forest managers to make informed decisions regarding management and conservation protocols for TTS.

Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines

Procedia PDF Downloads 494
2108 Methodological Aspect of Emergy Accounting in Co-Production Branching Systems

Authors: Keshab Shrestha, Hung-Suck Park

Abstract:

Emergy accounting of systems networks is guided by a definite set of rules called 'emergy algebra'. Systems networks contain two types of branching, co-product branching and split branching, and the emergy accounting procedure differs for each. According to emergy algebra, each branch in a co-product branching has a different transformity value, whereas the branches in a split branching share the same transformity value. Once the transformity value of each branch is determined, the emergy is calculated by multiplying it by the energy. The aim of this research is to solve the problems in determining transformity values in co-product branching through the introduction of a new methodology, the modified physical quantity method. The existing methodologies for emergy accounting in co-product branching are first discussed, and then the modified physical quantity method is introduced with a case study of Eucalyptus pulp production. The existing emergy accounting methodologies for co-product branching involve incorrect interpretations and lead to incorrect emergy calculations. The modified physical quantity method solves these problems: the transformity value calculated for each branch is different and directly applicable in the emergy calculations, and the methodology strictly follows the rules of emergy algebra. This new modified physical quantity methodology is a valid approach to emergy accounting, particularly in multi-production systems networks.
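
The co-product rule can be shown with a toy calculation: every co-product branch is assigned the total input emergy, so each branch gets its own transformity (total emergy divided by that branch's energy). The numbers below are invented for illustration and do not come from the Eucalyptus pulp case study.

    # Toy numbers only; they illustrate the co-product rule, not the pulp case study.
    total_input_emergy = 4.0e15                       # solar emjoules (sej) entering the process
    branches = {"pulp": 2.0e10, "bark_fuel": 5.0e9}   # available energy of each co-product, J

    for name, energy in branches.items():
        transformity = total_input_emergy / energy    # sej/J, different for every co-product branch
        emergy = transformity * energy                # each branch carries the full input emergy
        print(name, transformity, emergy)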

Keywords: co-product branching, emergy accounting, emergy algebra, modified physical quantity method, transformity value

Procedia PDF Downloads 268
2107 Merging Appeal to Ignorance, Composition, and Division Argument Schemes with Bayesian Networks

Authors: Kong Ngai Pei

Abstract:

The argument scheme approach to argumentation has two components. One is to identify the recurrent patterns of inference used in everyday discourse. The second is to devise critical questions to evaluate the inferences in these patterns. Although this approach is intuitive and contains many insightful ideas, it has been noted that it is not free of problems. One is that, because it disavows the probability calculus, it cannot give the exact strength of an inference. In order to tackle this problem, thereby paving the way to a more complete normative account of argument strength, it has been proposed that the most promising way forward is to combine the scheme-based approach with Bayesian networks (BNs). This paper pursues this line of thought, attempting to combine three common schemes, Appeal to Ignorance, Composition, and Division, with BNs. In the first part, it is argued that most (if not all) formulations of the critical questions corresponding to these schemes in the current argumentation literature are incomplete and not very informative. To remedy these flaws, more thorough and precise formulations of these questions are provided. In the second part, it is shown how graphical idioms (e.g., measurement and synthesis idioms) can be used to translate the schemes and their corresponding critical questions into the graphical structure of BNs, and how the probability tables of the nodes can be defined using functions of various sorts. In the final part, it is argued that many misuses of these schemes, traditionally called fallacies with the same names as the schemes, can indeed be adequately accounted for by the BN models proposed in this paper.

Keywords: appeal to ignorance, argument schemes, Bayesian networks, composition, division

Procedia PDF Downloads 264
2106 The Role of Planning and Memory in the Navigational Ability

Authors: Greeshma Sharma, Sushil Chandra, Vijander Singh, Alok Prakash Mittal

Abstract:

Navigational ability requires spatial representation, planning, and memory. It covers three interdependent domains: cognitive and perceptual factors, neural information processing, and variability in brain microstructure. Many attempts have been made to examine the role of spatial representation in navigational ability, and individual differences have been identified in the neural substrate. However, there is also a need to address the influence of planning and memory on navigational ability. The present study aims to evaluate the relations of the aforementioned factors to navigational ability. A total of 30 participants volunteered for a study in a virtual shopping complex and were subsequently classified into good and bad navigators based on their performance. The results showed that planning ability was the factor most correlated with navigational ability and also the factor that discriminated between good and bad navigators. Correlations were also found between spatial memory recall and navigational ability, while non-verbal episodic memory and spatial memory recall were also correlated with the learning variable. This study attempts to identify differences between people with higher and lower navigational ability on the basis of planning and memory.

Keywords: memory, planning, navigational ability, virtual reality

Procedia PDF Downloads 310
2105 Artificial Neural Network Based Approach in Prediction of Potential Water Pollution Across Different Land-Use Patterns

Authors: M.Rüştü Karaman, İsmail İşeri, Kadir Saltalı, A.Reşit Brohi, Ayhan Horuz, Mümin Dizman

Abstract:

Considerable attention has recently been given to the environmental hazards caused by agricultural chemicals such as excess fertilizers. In this study, a neural network approach was investigated for the prediction of potential nitrate pollution across different land-use patterns, using a feedforward multilayered artificial neural network (ANN) model with proper training. Periodical concentrations of some anions, especially nitrate (NO3-), and cations were detected in drainage waters collected from drain pipes placed in an irrigated tomato field, an unirrigated wheat field, fallow land, and pasture land. Soil samples were collected from the irrigated tomato field and the unirrigated wheat field on a grid system with 20 m x 20 m intervals, and site-specific nitrate concentrations in the soil samples were measured for ANN-based simulation of the nitrate leaching potential of the land profiles. In the application of the ANN model, a multilayered feedforward network was evaluated, and the training, validation, and testing data sets containing the measured soil nitrate values were constructed based on spatial variability. Based on the testing values, the optimal structure of 2-15-1 was obtained (R2 = 0.96, P < 0.01) for the unirrigated field, while the optimal structure of 2-10-1 was obtained (R2 = 0.96, P < 0.01) for the irrigated field. The results showed that the ANN model can be used successfully to predict potential nitrate leaching levels based on different land-use patterns. For the most suitable results, however, the model should be calibrated by training with different network structures, depending on site-specific soil parameters and varied agricultural management practices.
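
A minimal sketch of a 2-15-1 feedforward network of the kind reported for the unirrigated field is shown below using scikit-learn's MLPRegressor; the grid coordinates and nitrate values are synthetic stand-ins for the measured soil data.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 200, size=(100, 2))            # two inputs, e.g. grid x/y coordinates (m)
    y = 20 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(scale=2, size=100)  # synthetic nitrate

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0)  # a 2-15-1 network
    print(net.fit(X_tr, y_tr).score(X_te, y_te))      # R^2 on held-out grid points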

Keywords: artificial intelligence, ANN, drainage water, nitrate pollution

Procedia PDF Downloads 287
2104 An Intelligent Thermal-Aware Task Scheduler in Multiprocessor System on a Chip

Authors: Sina Saadati

Abstract:

Multiprocessor systems-on-chip (MPSoCs) are widely used in modern computers to execute sophisticated software and applications. These systems include different processors for distinct aims. Most of the proposed task schedulers attempt to improve energy consumption, and in some schedulers the processor temperature is considered in order to increase the system's reliability and performance. In this research, we propose a new method for thermal-aware task scheduling based on an artificial neural network (ANN). This method enables us to consider a variety of factors in the scheduling process: factors such as ambient temperature, season (which is important for some embedded systems), processor speed, and the type of computation performed by the tasks have a complex relationship with the final temperature of the system, and this relationship can be learned by a machine learning algorithm. Another point is that our solution makes the system intelligent, so that it can be adaptive. We also show that the computational complexity of the proposed method is low; as a consequence, it is also suitable for battery-powered systems.

Keywords: task scheduling, MPSoC, artificial neural network, machine learning, computer architecture, artificial intelligence

Procedia PDF Downloads 85
2103 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task, since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take account of the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from the vanishing gradient and low-efficiency problems when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and artificial neural networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
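
The attention mechanism credited above for capturing long-range context can be sketched in a few lines of NumPy as scaled dot-product attention over token embeddings; the embedding size and token count are arbitrary, and this is not the DNABERT variant built in the work.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Q, K, V: (tokens, d) arrays; every token attends to every other token."""
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V, weights

    rng = np.random.default_rng(0)
    E = rng.normal(size=(6, 16))                      # embeddings of six gene/part tokens
    Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
    context, attn = scaled_dot_product_attention(E @ Wq, E @ Wk, E @ Wv)
    print(attn.shape)                                 # (6, 6): attention of each token to every other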

Keywords: transformers, generative AI, gene expression design, classification

Procedia PDF Downloads 43
2102 Neural Network Approach for Solving Integral Equations

Authors: Bhavini Pandya

Abstract:

This paper considers Hη: T² → T², the Perturbed Cerbelli-Giona map. That is a family of two-dimensional nonlinear area-preserving transformations on the torus T² = [0,1]×[0,1] = ℝ²/ℤ². A single parameter η varies between 0 and 1, taking the transformation from a hyperbolic toral automorphism to the “Cerbelli-Giona” map, a system known to exhibit multifractal properties. Here we study the multifractal properties of the family of maps. We apply a box-counting method by defining a grid of boxes Bi(δ), where i is the index and δ is the size of the boxes, to quantify the distribution of stable and unstable manifolds of the map. When the parameter is in the ranges 0.51 < η < 0.58 and 0.68 < η < 1 the map is ergodic; i.e., the unstable and stable manifolds eventually cover the whole torus, although not in a uniform distribution. For accurate numerical results we require correspondingly accurate construction of the stable and unstable manifolds. Here we use the piecewise linearity of the map to achieve this, by computing the endpoints of line segments which define the global stable and unstable manifolds. This allows the generalized fractal dimension Dq, and the spectrum of dimensions f(α), to be computed with accuracy. Finally, the intersection of the unstable and stable manifolds of the map will be investigated and compared with the distribution of periodic points of the system.

Keywords: feed forward, gradient descent, neural network, integral equation

Procedia PDF Downloads 168
2101 Detecting Geographically Dispersed Overlay Communities Using Community Networks

Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan

Abstract:

Community detection is an extremely useful technique for understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract the communities in social networks. It has been suggested that nodes in close geographical proximity have a higher tendency to form communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize the effect of geographical proximity to extract geographically dispersed communities, at the expense of losing the information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the 'community network', in which the centroids of communities are considered as network nodes. We suggest that the inter-community link strengths, normalized over the community sizes, may be used to identify and extract the 'overlay communities'. The overlay communities have relatively high link strengths despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
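
A minimal sketch of the community-network construction is given below with NetworkX: Louvain communities are extracted, each community becomes a node, and inter-community link strengths are normalised by the community sizes. A bundled toy graph stands in for the Gowalla network, and size-based normalisation replaces the geographical centroids, since the toy graph has no coordinates (louvain_communities requires NetworkX 2.8 or later).

    import networkx as nx
    from networkx.algorithms import community

    G = nx.les_miserables_graph()                     # toy stand-in for the Gowalla network
    communities = community.louvain_communities(G, seed=0)
    node2comm = {n: i for i, c in enumerate(communities) for n in c}

    # Community network: one node per community; inter-community link strength
    # is the edge count normalised by the product of the community sizes.
    C = nx.Graph()
    C.add_nodes_from(range(len(communities)))
    for u, v in G.edges():
        cu, cv = node2comm[u], node2comm[v]
        if cu != cv:
            prev = C.get_edge_data(cu, cv, {"weight": 0})["weight"]
            C.add_edge(cu, cv, weight=prev + 1)
    for cu, cv, data in C.edges(data=True):
        data["strength"] = data["weight"] / (len(communities[cu]) * len(communities[cv]))

    strongest = sorted(C.edges(data=True), key=lambda e: e[2]["strength"], reverse=True)[:3]
    print(strongest)                                  # candidate 'overlay' community pairs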

Keywords: social networks, community detection, modularity optimization, geographically dispersed communities

Procedia PDF Downloads 219
2100 Screening of the Genes FOLH1 and MTHFR among the Mothers of Congenital Neural Tube Defected Babies in West Bengal, India

Authors: Silpita Paul, Susanta Sadhukhan, Biswanath Maity, Madhusudan Das

Abstract:

Neural tube defects (NTDs) are one of the most common forms of birth defect and affect ~300,000 newborns worldwide each year. The prevalence is higher in northern India (11 per 1000 births) than in southern India (5 per 1000 births). NTDs are among the common birth defects associated with low blood folate and homocysteine (Hcy) concentrations. Though the mechanism is still unknown, it is now established that NTDs in humans are polygenic in nature and follow a heterogeneous trait. In spite of this heterogeneity, polymorphisms in a few genes significantly affect the trait of NTDs, and polymorphisms in the genes FOLH1 and MTHFR play an important role. In this study, these polymorphisms were screened by bi-directional sequencing in 30 mothers with NTD babies as cases. The results revealed that 26.67% of the patients had a bi-allelic FOLH1 polymorphism. This polymorphism has been identified as p.Y60H and is frequently implicated in NTDs. The study of the MTHFR gene showed two different SNPs, rs1801133 (at exon 4) and rs1801131 (at exon 7): 6.67% of the patients carried both mono- and bi-allelic MTHFR rs1801133 polymorphisms, and 6.67% carried a bi-allelic MTHFR rs1801131 polymorphism. These polymorphisms are responsible for the p.A222V and p.E429A changes, respectively, and are frequently involved in NTD formation. They mainly affect the absorption of dietary folate from the intestine and the formation of 5-methyltetrahydrofolate (5-MTHF) from 5,10-methylenetetrahydrofolate (5,10-MTHF), which is the functional folate form in our system. Though the study is not complete yet, these polymorphisms play crucial roles in the formation of NTDs in other world populations. Based on the results to date, it can be concluded that they also play a significant role in our population, as no such changes were found in the control samples.

Keywords: neural tube defects, polymorphism, FOLH1, MTHFR

Procedia PDF Downloads 288
2099 Clubhouse: A Minor Rebellion against the Algorithmic Tyranny of the Majority

Authors: Vahid Asadzadeh, Amin Ataee

Abstract:

Since the advent of social media, there has been a wave of optimism among researchers and civic activists about the influence of virtual networks on the democratization process, which has gradually waned. One of the lesser-known concerns is how to increase the possibility of hearing the voices of different minorities. According to the theory of media logic, the media, using their technological capabilities, act as a structure through which events and ideas are interpreted. Social media, through the use of machine learning and algorithms, has formed a kind of structure in which the voices of minorities and less popular topics are lost amid the commotion of the trends. In fact, the recommender systems and algorithms used in social media are designed to help promote trends and make popular content more popular, and content that belongs to minorities is constantly marginalized. As social networks gradually play a more active role in politics, the possibility of freely participating in the reproduction and reinterpretation of structures in general, and political structures in particular (as Laclau and Mouffe had in mind), can be considered a criterion of democracy in action. The point is that the media logic of virtual networks is shaped by the rule, and even the tyranny, of the majority, and this logic does not make it possible to design a self-founding and self-revolutionary model of democracy. In other words, today's social networks, though seemingly full of variety, are governed by the logic of homogeneity and do not allow for multiplicity as understood in immanent radical democracies (influenced by Gilles Deleuze). However, with the emergence and increasing popularity of Clubhouse as a new social medium, there seems to be a shift in the social media space: the diminishing role of algorithms and recommender systems as content delivery interfaces. This has meant that in Clubhouse the voices of minorities are better heard and the diversity of political tendencies manifests itself better. The purpose of this article is to show, first, how social networks serve the elimination of minorities in general, and second, to argue that the media logic of social networks must adapt to new interpretations of democracy that give more space to minorities and human rights. Finally, this article will show how Clubhouse serves these new interpretations of democracy, at least in a minimal way. To achieve the mentioned goals, this article uses a descriptive-analytical method: first, the relation between media logic and postmodern democracy is examined; then, the political economy of popularity in social media and its conflict with democracy is discussed; finally, it is explored how Clubhouse provides a new horizon for the concepts embodied in radical democracy, a horizon that more effectively serves the rights of minorities and human rights in general.

Keywords: algorithmic tyranny, Clubhouse, minority rights, radical democracy, social media

Procedia PDF Downloads 132
2098 Applying Renowned Energy Simulation Engines to Neural Control System of Double Skin Façade

Authors: Zdravko Eškinja, Lovre Miljanić, Ognjen Kuljača

Abstract:

This paper is an overview of the simulation tools used to model the specific thermal dynamics that occur while controlling a double skin façade. Research was conducted on a simplified construction with a single zone, where one side is glazed. Heat flow and temperature responses were simulated in three different simulation tools: IDA-ICE, EnergyPlus, and HAMBASE. The excitation of the observed system, used in all simulations, was a temperature step in the exterior environment. Air infiltration, insulation, and other disturbances were excluded from this research. Although such isolated behaviour is not possible in reality, the experiments were carried out to gain novel information about heat flow transients which are not observable under regular conditions. The results revealed new possibilities for adapting the parameters of the neural network regulator. Alongside the numerical simulations, the same set-up was also tested in a real-time experiment with a 1:18 scaled model and a thermal chamber. The comparison analysis yields interesting conclusions about simulation accuracy in this particular case.

Keywords: double skin façade, experimental tests, heat control, heat flow, simulated tests, simulation tools

Procedia PDF Downloads 217
2097 Increasing of Resiliency by Using Gas Storage in Iranian Gas Network

Authors: Mohsen Dourandish

Abstract:

Iran has a huge pipeline network in every province of the country, the longest and vastest pipeline network after those of Russia and the USA (360,000 km of high-pressure pipelines and 250,000 km of distribution networks). Furthermore, in recent years the National Iranian Gas Company (NIGC) has been planning to extend the natural gas network to cover all cities and villages with more than 20 families, so that 97 percent of Iran's population will be gas consumers by 2020. Under these conditions, network resiliency is the first priority of NIGC, and accordingly several plans for increasing the resiliency of the gas network are under way. The most important strategies of NIGC are converting tree-pattern networks to loop gas networks and developing underground gas storage near the main gas-consuming centers. In this regard, NIGC is planning the construction of over 3,500 km of high-pressure pipelines and also 10 TCM of gas storage capacity in UGSs.

Keywords: Iranian gas network, peak shaving, resiliency, underground gas storage

Procedia PDF Downloads 304