Search results for: radial basis function neural network

12039 Easy Way of Optimal Process-Storage Network Design

Authors: Gyeongbeom Yi

Abstract:

The purpose of this study is to introduce the analytic solution for determining the optimal capacity (lot-size) of a multiproduct, multistage production and inventory system to meet the finished product demand. Reasonable decision-making about the capacity of processes and storage units is an important subject for industry. The common industrial approach to this subject is to use the classical economic lot sizing method, the EOQ/EPQ (Economic Order Quantity/Economic Production Quantity) model, combined with practical experience. However, the unrealistic material flow assumption of the EOQ/EPQ model is not suitable for chemical plant design with highly interlinked processes and storage units. This study overcomes the limitation of the classical lot sizing method, which was developed on the basis of the single-product and single-stage assumption. The superstructure of the plant considered consists of a network of processes and storage units interlinked in series and/or in parallel. The processes involve chemical reactions with multiple feedstock materials and multiple products as well as mixing, splitting or transportation of materials. The objective function for optimization is minimizing the total cost composed of setup and inventory holding costs as well as the capital costs of constructing processes and storage units. A novel production and inventory analysis method, the PSW (Periodic Square Wave) model, is applied. The advantage of the PSW model comes from the fact that the model provides a set of simple analytic solutions in spite of a realistic description of the material flow between processes and storage units. The resulting simple analytic solutions can greatly support sound and rapid investment decisions for the plant design and operation problems confronted in diverse economic situations.
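
For reference, the classical EOQ model that the PSW approach is contrasted with reduces each item to a single trade-off between setup (ordering) cost and inventory holding cost; the symbols below follow the standard textbook formulation and are not taken from the paper itself.

```latex
% Classical EOQ: demand rate D, setup/ordering cost S per order, holding cost H per unit per period
Q^{*} = \sqrt{\frac{2DS}{H}}, \qquad C(Q^{*}) = \sqrt{2DSH}
```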

Keywords: analytic solution, optimal design, process-storage network

Procedia PDF Downloads 331
12038 MhAGCN: Multi-Head Attention Graph Convolutional Network for Web Services Classification

Authors: Bing Li, Zhi Li, Yilong Yang

Abstract:

Web service classification can promote the quality of service discovery and management in the service repository. It is widely used to locate the services that developers desire. Although traditional classification methods based on supervised learning models can achieve classification tasks, developers need to manually tag web services, and the quality of these tags may not be enough to establish an accurate classifier for service classification. With the doubling of the number of web services, the manual tagging method has become unrealistic. In recent years, the attention mechanism has made remarkable progress in the field of deep learning, and its huge potential has been fully demonstrated in various fields. This paper designs a multi-head attention graph convolutional network (MHAGCN) service classification method, which can assign different weights to the neighborhood nodes without complicated matrix operations or relying on understanding the entire graph structure. The framework combines the advantages of the attention mechanism and the graph convolutional neural network. It can classify web services through automatic feature extraction. The comprehensive experimental results on a real dataset not only show the superior performance of the proposed model over the existing models but also demonstrate its potentially good interpretability for graph analysis.
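
As a rough illustration of the mechanism described, the sketch below implements a generic multi-head attention graph convolution layer in PyTorch. The layer width, number of heads, and the assumption that the adjacency matrix already contains self-loops are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttentionGraphConv(nn.Module):
    """Generic GAT-style layer: per-head attention weights over neighbours."""
    def __init__(self, in_dim, out_dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.proj = nn.Linear(in_dim, out_dim * num_heads, bias=False)
        # one attention vector per head, split into source and target halves
        self.attn = nn.Parameter(torch.randn(num_heads, 2 * out_dim))

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency with self-loops
        n = x.size(0)
        h = self.proj(x).view(n, self.num_heads, -1)                # (N, H, D)
        src = (h * self.attn[:, : h.size(-1)]).sum(-1)              # (N, H)
        dst = (h * self.attn[:, h.size(-1):]).sum(-1)               # (N, H)
        scores = F.leaky_relu(src.unsqueeze(1) + dst.unsqueeze(0))  # (N, N, H)
        scores = scores.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=1)                        # weights over neighbours
        out = torch.einsum("ijh,jhd->ihd", alpha, h)                # weighted aggregation
        return out.reshape(n, -1)                                   # concatenate heads
```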

Keywords: attention mechanism, graph convolutional network, interpretability, service classification, service discovery

Procedia PDF Downloads 136
12037 A Bacterial Foraging Optimization Algorithm Applied to the Synthesis of Polyacrylamide Hydrogels

Authors: Florin Leon, Silvia Curteanu

Abstract:

The Bacterial Foraging Optimization (BFO) algorithm is inspired by the behavior of bacteria such as Escherichia coli or Myxococcus xanthus when searching for food, more precisely the chemotaxis behavior. Bacteria perceive chemical gradients in the environment, such as nutrients, and also other individual bacteria, and move toward or in the opposite direction to those signals. The application example considered as a case study consists in establishing the dependency between the reaction yield of hydrogels based on polyacrylamide and the working conditions such as time, temperature, monomer, initiator, crosslinking agent and inclusion polymer concentrations, as well as type of the polymer added. This process is modeled with a neural network which is included in an optimization procedure based on BFO. An experimental study of BFO parameters is performed. The results show that the algorithm is quite robust and can obtain good results for diverse combinations of parameter values.
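
The chemotaxis behaviour described above (tumble to a random direction, then keep swimming while the objective improves) can be outlined as follows. This is a deliberately simplified, chemotaxis-only sketch with hypothetical parameter values; the reproduction and elimination-dispersal steps of the full BFO algorithm, and the neural-network yield model used in the paper, are omitted.

```python
import numpy as np

def bfo_minimize(cost, dim, n_bacteria=20, n_chem=50, step=0.1, rng=None):
    """Chemotaxis-only bacterial foraging sketch: tumble, then swim while improving."""
    rng = rng or np.random.default_rng(0)
    pop = rng.uniform(-1.0, 1.0, (n_bacteria, dim))
    for _ in range(n_chem):
        for i in range(n_bacteria):
            j_last = cost(pop[i])
            direction = rng.normal(size=dim)
            direction /= np.linalg.norm(direction)   # random tumble direction
            for _ in range(4):                       # swim up to 4 steps
                trial = pop[i] + step * direction
                if cost(trial) < j_last:
                    pop[i], j_last = trial, cost(trial)
                else:
                    break
    best = min(pop, key=cost)
    return best, cost(best)

# usage: a simple quadratic stands in for the neural-network-based objective
x_best, f_best = bfo_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```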

Keywords: bacterial foraging, hydrogels, modeling and optimization, neural networks

Procedia PDF Downloads 153
12036 Water Body Detection and Estimation from Landsat Satellite Images Using Deep Learning

Authors: M. Devaki, K. B. Jayanthi

Abstract:

The identification of water bodies from satellite images has recently received a great deal of attention. Different methods have been developed to distinguish water bodies from various satellite images that vary in terms of time and space. Urban water body identification issues manifest in numerous applications that demand a great deal of certainty. There has been a sharp rise in the usage of satellite images to map natural resources, including urban water bodies and forests, during the past several years. This is because water and forest resources depend on each other so heavily that ongoing monitoring of both is essential to their sustainable management. The relevant elements from satellite pictures have been chosen using a variety of techniques, including machine learning. Then, a convolutional neural network (CNN) architecture is created that can classify a superpixel in a complex metropolitan scene into one of two classes, depending on whether or not it includes water. The deep learning technique, CNN, has advanced tremendously in a variety of visual-related tasks. CNN can improve classification performance by exploiting the spectral-spatial regularities of the input data and extracting deep features hierarchically from raw pictures. The extent of the water body is then calculated using the satellite image's resolution. Experimental results demonstrate that the suggested method outperformed conventional approaches in terms of water extraction accuracy from remote-sensing images, with an average overall accuracy of 97%.
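
As a concrete (hypothetical) illustration of the superpixel classifier described, a minimal two-class CNN might look like the sketch below. The 64x64 RGB patch size and the layer widths are assumptions; the paper does not state its exact architecture.

```python
import torch.nn as nn

# Minimal water / non-water patch classifier in the spirit of the superpixel CNN.
water_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),   # assumes 64x64 input patches
    nn.Linear(64, 2),                         # logits: water vs. non-water
)
```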

Keywords: water body, deep learning, satellite images, convolutional neural network

Procedia PDF Downloads 89
12035 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it difficult to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic developments could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer the use of public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, while ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state appears to be reached.
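
The generalized delta rule referred to above is the standard back-propagation weight update with a momentum term; the notation below is the usual textbook form rather than anything specific to this study (η is the learning rate, α the momentum, δj the back-propagated error at unit j, and xi the input from unit i).

```latex
\Delta w_{ij}(t) = \eta\, \delta_j x_i + \alpha\, \Delta w_{ij}(t-1)
```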

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 460
12034 The Impact of Artificial Intelligence on Agricultural Machines and Plant Nutrition

Authors: Kirolos Gerges Yakoub Gerges

Abstract:

Autonomous agricultural machines act in stochastic surroundings and therefore should be capable of perceiving the surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular deep learning. Deep convolutional neural networks excel in labeling and perceiving colour images, and since the cost of RGB cameras is low, the hardware cost of accurate perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to carry out real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g. human, animal, sky, road, field, shelterbelt and obstacle), and the capacity to provide correct real-time perception of the agricultural environment is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses within the picture. As the network is still being designed and optimized, only a qualitative analysis of the technique was complete at the abstract submission deadline. Following this deadline, the finalized design is quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed. It is feasible to offer cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.

Keywords: centrifuge pump, hydraulic energy, agricultural applications, irrigation, axial flux machines, axial flux applications, coreless machines, PM machines, autonomous agricultural machines, deep learning, safety, visual perception

Procedia PDF Downloads 26
12033 Borrowing Performance: A Network Connectivity Analysis of Second-Tier Cities in Turkey

Authors: Eğinç Simay Ertürk, Ferhan Gezici

Abstract:

The decline of large cities and the rise of second-tier cities have been observed as a global trend with significant implications for economic development and urban planning. In this context, the concepts of agglomeration shadow and borrowed size have gained importance as network externalities that affect the growth and development of surrounding areas. Istanbul, Izmir, and Ankara are Turkey's most significant metropolitan cities and play a major role in the country's economy. The surrounding cities rely on these metropolitan cities for economic growth and development. However, the concentration of resources and investment in a single location can lead to agglomeration shadows in the surrounding areas. On the other hand, network connectivity between metropolitan and second-tier cities can result in borrowed function and performance, enabling smaller cities to access resources, investment, and knowledge to which they would not otherwise have access. The study hypothesizes that the network connectivity between second-tier and metropolitan cities in Turkey enables second-tier cities to increase their urban performance by borrowing size through these networks. Regression analysis will be used to identify the specific network connectivity parameters most strongly associated with urban performance. Network connectivity will be measured with parameters such as transportation nodes and telecommunications infrastructure, and urban performance will be measured with an index including parameters such as employment, education, and industry entrepreneurship, with data at the province level. The contribution of the study lies in its research on how networking can benefit second-tier cities in Turkey.

Keywords: network connectivity, borrowed size, agglomeration shadow, secondary cities

Procedia PDF Downloads 81
12032 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition

Authors: Qin Long, Li Xiaoge

Abstract:

Knowledge graphs have now become a new form of knowledge representation. However, there is no consensus regarding a plausible definition of entities and relationships in the domain-specific knowledge graph. Further, owing to several limitations and deficiencies, various domain-specific entity and relationship recognition approaches are far from perfect. Specifically, named entity recognition in Chinese domains is a critical task for natural language processing applications. However, a bottleneck problem with Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a distantly supervised named entity recognition framework for specific domains is proposed. The framework is divided into two stages: first, a distantly supervised corpus is generated based on an entity linking model built on a graph attention neural network; second, the generated corpus is used as input to train the distantly supervised named entity recognition model, which then extracts named entities. The linking model is verified on the CCKS2019 entity linking corpus, and its F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is more suitable for distantly supervised named entity recognition tasks. Finally, the framework is applied in the computer domain, and the results show that it can obtain domain-specific named entities.

Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network

Procedia PDF Downloads 95
12031 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques

Authors: Gizem Eser Erdek

Abstract:

This study investigates predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. As the life of cutting tools decreases, they cause damage to the raw material they are processing. This study aims to predict the remaining life of the cutting tool based on the damage it causes to the raw material. For this, hole photos were collected from the hole-drilling machine for 8 months. Photos were labeled into 5 classes according to hole quality. In this way, the problem was transformed into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, which is a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, have been tested on the data set. A hybrid model using convolutional neural networks and support vector machines is also used for comparison. When all models are compared, it has been determined that the model in which convolutional neural networks are used gives successful results, with a 74% accuracy rate. In the preliminary studies, the data set was arranged to include only the best and worst classes, and the study gave ~93% accuracy when the binary classification model was applied. The results of this study showed that the remaining life of the cutting tools could be predicted by deep learning methods based on the damage to the raw material. Experiments have proven that deep learning methods can be used as an alternative for cutting tool life estimation.

Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VggNet

Procedia PDF Downloads 77
12030 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors like mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
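
To make the detection idea concrete, the sketch below shows a generic autoencoder trained on benign data together with a thresholded reconstruction-error test. The spectral input dimension, layer sizes, and the way the threshold is chosen are assumptions for illustration, not the authors' exact multi-modal architecture.

```python
import torch
import torch.nn as nn

class SpectralAutoencoder(nn.Module):
    """Reconstructs flattened per-channel (RGB) spectra of an image."""
    def __init__(self, spec_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(spec_dim * 3, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, spec_dim * 3))

    def forward(self, x):                  # x: (batch, 3 * spec_dim)
        return self.decoder(self.encoder(x))

def is_adversarial(model, x, threshold):
    """Flag inputs whose reconstruction MSE exceeds a threshold fitted on benign data."""
    with torch.no_grad():
        mse = torch.mean((model(x) - x) ** 2, dim=1)
    return mse > threshold                 # high error => likely adversarial
```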

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 113
12029 Developed CNN Model with Various Input Scale Data Evaluation for Bearing Faults Prognostics

Authors: Anas H. Aljemely, Jianping Xuan

Abstract:

Rolling bearing fault diagnosis is a pivotal issue in the rotating machinery of modern manufacturing. In this research, an improved deep learning method for bearing fault diagnosis based on raw vibration signals is proposed. Multiple scales of the raw vibration signals are selected for evaluating the condition monitoring system, and the deep learning process has shown its effectiveness in fault diagnosis. The proposed method employs an exponential linear unit (ELU) layer in a convolutional neural network (CNN), which applies the identity function to positive inputs and an exponential nonlinearity to negative inputs, together with a particular convolutional operation to extract valuable features. The identification results show that the improved method achieves the highest accuracy with a 100-dimensional scale and increases training and testing speed.
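
For reference, the ELU activation mentioned above is defined as follows, where α is a positive hyperparameter (typically 1):

```latex
\mathrm{ELU}(x) =
\begin{cases}
x, & x > 0 \\
\alpha\,(e^{x} - 1), & x \le 0
\end{cases}
```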

Keywords: bearing fault prognostics, developed CNN model, multiple-scale evaluation, deep learning features

Procedia PDF Downloads 210
12028 Research on Online Consumption of College Students in China with Stimulate-Organism-Reaction Driven Model

Authors: Wei Lu

Abstract:

With the development of information technology in China, network consumption is becoming more and more popular. As a special group, college students have a high level of education and distinct opinions and personalities, and they have gradually become the key focus group of network consumption for the future. Studying college students' online consumption behavior has important theoretical significance and practical value. Based on the Stimulus-Organism-Response (SOR) driving model and the structural equation model, this paper establishes a model of the factors influencing college students' online consumption behavior, evaluates and amends the model using SPSS and AMOS software, analyses and determines the factors that positively affect marketing to college students, and provides an effective basis for guiding and promoting college student consumption.

Keywords: college students, online consumption, stimulate-organism-reaction driving model, structural equation model

Procedia PDF Downloads 153
12027 Comparison of Classical Computer Vision vs. Convolutional Neural Networks Approaches for Weed Mapping in Aerial Images

Authors: Paulo Cesar Pereira Junior, Alexandre Monteiro, Rafael da Luz Ribeiro, Antonio Carlos Sobieranski, Aldo von Wangenheim

Abstract:

In this paper, we present a comparison between convolutional neural networks and classical computer vision approaches for the specific precision agriculture problem of weed mapping in aerial images of sugarcane fields. A systematic literature review was conducted to find which computer vision methods are being used on this specific problem. The most cited methods were implemented, as well as four models of convolutional neural networks. All implemented approaches were tested using the same dataset, and their results were quantitatively and qualitatively analyzed. The obtained results were compared to a ground truth made by a human expert for validation. The results indicate that the convolutional neural networks present better precision and generalize better than the classical models.

Keywords: convolutional neural networks, deep learning, digital image processing, precision agriculture, semantic segmentation, unmanned aerial vehicles

Procedia PDF Downloads 260
12026 Fog Computing- Network Based Computing

Authors: Navaneeth Krishnan, Chandan N. Bhagwat, Aparajit P. Utpat

Abstract:

Cloud computing provides us with a means to upload data and use applications over the internet. As the number of devices connecting to the cloud grows, there is undue pressure on the cloud infrastructure. Fog computing, also called network-based computing or edge computing, allows part of the processing done in the cloud to be moved to the network devices along the path from the node to the cloud. Therefore the nodes connected to the cloud have a better response time. This paper proposes a method of moving computation from the cloud to the network by introducing an Android-like app store on the networking devices.

Keywords: cloud computing, fog computing, network devices, appstore

Procedia PDF Downloads 388
12025 Time Synchronization between the eNBs in E-UTRAN under the Asymmetric IP Network

Authors: M. Kollar, A. Zieba

Abstract:

In this paper, we present a method for time synchronization between two eNodeBs (eNBs) in an E-UTRAN (Evolved Universal Terrestrial Radio Access Network). The two eNBs cooperate in the so-called inter-eNB CA (Carrier Aggregation) case and are connected via an asymmetric IP network. We solve the problem by using broadcast signals generated in E-UTRAN as synchronization signals. The results show that time synchronization with the proposed method is possible with an error significantly less than 1 ms, which is sufficient considering that the transmission time interval in E-UTRAN is 1 ms. This makes the proposed low-complexity method more suitable than the Network Time Protocol (NTP) for mobile applications with generated broadcast signals where time synchronization over an asymmetric network is required.

Keywords: IP scheduled throughput, E-UTRAN, Evolved Universal Terrestrial Radio Access Network, NTP, Network Time Protocol, asymmetric network, delay

Procedia PDF Downloads 361
12024 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can be improved. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a 20% validation split. Both models are evaluated using the external dataset for validation, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
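
A minimal sketch of the transfer-learning head described (a frozen, pre-trained DenseNet201 backbone feeding a small dense classifier for the three classes) is given below; the classifier layer sizes and the omission of the autoencoder coupling are assumptions for illustration.

```python
import torch.nn as nn
from torchvision import models

# Pre-trained backbone (the string weight spec requires a recent torchvision release).
backbone = models.densenet201(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False                      # freeze pre-trained features
backbone.classifier = nn.Sequential(             # replace the ImageNet head
    nn.Linear(backbone.classifier.in_features, 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, 3),                           # COVID-19 / normal / pneumonia
)
```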

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 130
12023 Speaker Recognition Using LIRA Neural Networks

Authors: Nestor A. Garcia Fragoso, Tetyana Baydyk, Ernst Kussul

Abstract:

This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a recognition system using this classifier for voice recognition. From a specific set of speakers, we can recognize the speaker's voice. For this purpose, the system uses spectrograms of the voice signals as input, extracts the characteristics, and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security systems or smart buildings for different types of intelligent devices.
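
The input stage described (voice signal to spectrogram image) could be sketched as below; the file name, window length, and 8-bit scaling are illustrative assumptions, a mono recording is assumed, and the LIRA classifier itself is not reproduced here.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("phrase.wav")              # hypothetical mono recording
f, t, sxx = spectrogram(samples, fs=rate, nperseg=512)  # short-time power spectrum
image = 255 * np.log1p(sxx) / np.log1p(sxx).max()       # log-scale to an 8-bit range
image = image.astype(np.uint8)                          # grayscale spectrogram image
```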

Keywords: extreme learning, LIRA neural classifier, speaker identification, voice recognition

Procedia PDF Downloads 177
12022 Value Co-Creation Model for Relationships Management

Authors: Kolesnik Nadezda A.

Abstract:

The research aims to elaborate an inter-organizational network relationship management model that maximizes value co-creation. We propose a network management framework that requires evaluation of network partners with respect to their position and role in the network, and elaboration of an appropriate relationship development strategy with partners in the network. Empirical research and validation are based on the case study method, including structured in-depth interviews with companies from the B2B market.

Keywords: inter-organizational networks, value co-creation, model, B2B market

Procedia PDF Downloads 456
12021 A Convolutional Neural Network-Based Model for Lassa fever Virus Prediction Using Patient Blood Smear Image

Authors: A. M. John-Otumu, M. M. Rahman, M. C. Onuoha, E. P. Ojonugwa

Abstract:

A Convolutional Neural Network (CNN) model for predicting Lassa fever was built using the Python 3.8.0 programming language, alongside the Keras 2.2.4 and TensorFlow 2.6.1 libraries as the development environment, in order to reduce the current high risk of Lassa fever in West Africa, particularly in Nigeria. The study was prompted by some major flaws in existing conventional laboratory equipment for diagnosing Lassa fever (RT-PCR), as well as flaws in AI-based techniques that have been used for probing and prognosis of Lassa fever, based on the literature. A total of 15,679 blood smear microscopic images were collected. The proposed model was trained on 70% of the dataset and tested on 30% of the microscopic images to avoid overfitting. A 3x3x3 convolution filter was also used in the proposed system to extract features from the microscopic images. The proposed CNN-based model had a recall value of 96%, a precision value of 93%, an F1 score of 95%, and an accuracy of 94% in predicting and accurately classifying the images into clean or infected samples. Based on empirical evidence from the results and the literature consulted, the proposed model outperformed other existing AI-based techniques evaluated. If properly deployed, the model will assist physicians, medical laboratory scientists, and patients in making accurate diagnoses for Lassa fever cases, allowing the mortality rate due to the Lassa fever virus to be reduced through sound decision-making.

Keywords: artificial intelligence, ANN, blood smear, CNN, deep learning, Lassa fever

Procedia PDF Downloads 120
12020 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation

Authors: Matthias Leitner, Gernot Pottlacher

Abstract:

Experimental determination of critical point data like critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions could be diamond anvil devices, two stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressures would be another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting metals. However, the critical point also can be estimated by extrapolating the liquid phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much as possible of the liquid phase and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine thermal volume expansion, and from that density of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm. To increase the accuracy of temperature deduction, spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited which implies an increased radial expansion. As a consequence, measuring the temperature dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles that are calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the pulse-heating start, the temperature dependent volume expansion is calculated. With the help of the known room-temperature density, volume expansion is then converted into density data. The so-obtained liquid density behavior is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid phase density was in a second step utilized as input data for the estimation of niobium’s critical point. The approach used heuristically takes into account the crossover from mean field to Ising behavior, as well as the non-linearity of the phase diagram’s diameter.
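
Since longitudinal expansion is inhibited, the liquid density follows directly from the measured radial expansion. In generic notation (not copied from the paper), with room-temperature density ρ₀ and wire diameters d₀ before and d(T) during heating:

```latex
\rho(T) = \rho_{0}\left(\frac{d_{0}}{d(T)}\right)^{2}
```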

Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion

Procedia PDF Downloads 219
12019 Design of Data Management Software System Supporting Rendezvous and Docking with Various Spaceships

Authors: Zhan Panpan, Lu Lan, Sun Yong, He Xiongwen, Yan Dong, Gu Ming

Abstract:

The functions of the two-spacecraft docking network, namely the communication and control between a docking target and various spaceships, are realized in the space lab data management system. In order to solve the problem of the complex data communication modes between the space lab and various spaceships, and the problem of software reuse caused by non-standard protocols, a data management software system supporting rendezvous and docking with various spaceships has been designed. The software system is based on the CCSDS Spacecraft Onboard Interface Services (SOIS). It consists of a Software Driver Layer, a Middleware Layer and an Application Layer. The Software Driver Layer hides the various device interfaces using a uniform device driver framework. The Middleware Layer is divided into three layers: the transfer layer, the application support layer and the system business layer. Communication over the space lab platform bus and the docking bus is realized in the transfer layer. The application support layer provides inter-task communication and unified time management for the software system. The data management software functions are realized in the system business layer, which contains the telemetry management service, telecontrol management service, flight status management service, rendezvous and docking management service and so on. The Application Layer accomplishes the tasks defined for the space lab data management system using the standard interfaces supplied by the Middleware Layer. On the basis of the layered architecture, the rendezvous and docking tasks and the rendezvous and docking management service are independent in the software system. The rendezvous and docking tasks will be activated and executed according to the different spaceships. In this way, the communication management functions in the independent flight mode, the combination mode of the manned spaceship and the combination mode of the cargo spaceship are achieved separately. The software architecture defines standard application interfaces for the services in each layer. Different requirements of the space lab can be supported by the use of standard services per layer, and the scalability and flexibility of the data management software can be effectively improved. It can also dynamically expand the number of visiting spaceships and adapt to their protocols. The software system has been applied in the data management subsystem of the space lab, and has been verified in the flight of the space lab. The research results of this paper can provide the basis for the design of the data management system in the future space station.

Keywords: space lab, rendezvous and docking, data management, software system

Procedia PDF Downloads 368
12018 A Flexible Pareto Distribution Using α-Power Transformation

Authors: Shumaila Ehtisham

Abstract:

In statistical distribution theory, adding an extra parameter to classical distributions is a usual practice. In this study, a new distribution referred to as the α-Power Pareto distribution is introduced by including an extra parameter. Several properties of the proposed distribution, including explicit expressions for the moment generating function, mode, quantiles, entropies and order statistics, are obtained. Unknown parameters have been estimated using the maximum likelihood estimation technique. Two real datasets have been considered to examine the usefulness of the proposed distribution. It has been observed that the α-Power Pareto distribution outperforms different variants of the Pareto distribution when compared on the basis of model selection criteria.
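
For context, the α-power transformation commonly used in the literature turns a baseline CDF G(x) into a new CDF as shown below; applying it to the Pareto baseline gives a distribution of the kind studied here. The notation is the standard generic form, not copied from the paper.

```latex
F_{\mathrm{APT}}(x) = \frac{\alpha^{G(x)} - 1}{\alpha - 1}, \quad \alpha > 0,\ \alpha \neq 1,
\qquad
G(x) = 1 - \left(\frac{x_m}{x}\right)^{\theta}, \quad x \ge x_m \ \ \text{(Pareto baseline)}
```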

Keywords: α-power transformation, maximum likelihood estimation, moment generating function, Pareto distribution

Procedia PDF Downloads 215
12017 Modelling the Education Supply Chain with Network Data Envelopment Analysis

Authors: Sourour Ramzi, Claudia Sarrico

Abstract:

Little has been done on network DEA in education, and nobody has attempted to model the whole education supply chain using network DEA. As such the contribution of the present paper is to propose a model for measuring the efficiency of education supply chains using network DEA. First, we use a general survey of data envelopment analysis (DEA) to establish the emergent themes for research in DEA, and focus on the theme of Network DEA. Second, we use a survey on two-stage DEA models, and Network DEA to write a state of the art on Network DEA, particularly applied to supply chain management. Third, we use a survey on DEA applications to establish the most influential papers on DEA education applications, in order to establish the state of the art on applications of DEA in education, in general, and applications of DEA to education using network DEA, in particular. Finally, we propose a model for measuring the performance of education supply chains of different education systems (countries or states within a country, for instance). We then use this model on some empirical data.
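
For orientation, the classical single-stage CCR model that network DEA generalizes evaluates a decision-making unit o with input vector x_o and output vector y_o through the linear program below (standard multiplier form with generic notation, not taken from the paper); network DEA replaces the single black-box stage with linked sub-stages.

```latex
\max_{u,v}\ u^{\top} y_{o}
\quad \text{s.t.} \quad
v^{\top} x_{o} = 1, \qquad
u^{\top} y_{j} - v^{\top} x_{j} \le 0 \ \ \forall j, \qquad
u \ge 0,\ v \ge 0
```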

Keywords: supply chain, education, data envelopment analysis, network DEA

Procedia PDF Downloads 368
12016 Calibration of the Radial Installation Limit Error of the Accelerometer in the Gravity Gradient Instrument

Authors: Danni Cong, Meiping Wu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin

Abstract:

The gravity gradient instrument (GGI) is the core of the gravity gradiometer, so the structural error of the sensor has a great impact on the measurement results. In order not to compromise the targeted measurement accuracy, a limit error is required for the installation of the accelerometer. In this paper, based on the established measuring principle model, the radial installation limit error is calibrated; this is taken as an example to provide a method for calculating the other installation limit errors while ensuring the accuracy of the measurement result. This method provides the idea for deriving the limit errors of the geometric structure of the sensor, laying the foundation for mechanical precision design and physical design.

Keywords: gravity gradient sensor, radial installation limit error, accelerometer, uniaxial rotational modulation

Procedia PDF Downloads 422
12015 Orphan Node Inclusion Protocol for Wireless Sensor Network

Authors: Sandeep Singh Waraich

Abstract:

A wireless sensor network (WSN) consists of a large number of sensor nodes. The disparity in their energy consumption usually leads to a loss of equilibrium in the wireless sensor network, which may further result in an energy hole problem in the network. In this paper, we have considered the inclusion of orphan nodes, which usually remain unutilized as intermediate nodes in multi-hop routing. The Orphan Node Inclusion (ONI) protocol lets cluster members bring orphan nodes into their clusters, thereby saving important resources and increasing network lifetime in critical WSN applications.

Keywords: wireless sensor network, orphan node, clustering, ONI protocol

Procedia PDF Downloads 420
12014 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desired to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework, where denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction whose size is the same as that of the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical total variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into the intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves the readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows a significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to the image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP(Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP(Institute for Information and Communications Technology Promotion).
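
The total variation decomposition referred to above is, in its classical (ROF-type) form, the convex problem below, where f is the observed image, u its intrinsic piecewise-smooth part, f − u the nuisance/artefact residual, and λ balances fidelity against smoothness. This generic formulation is given for orientation and is not copied from the paper.

```latex
\min_{u}\ \int_{\Omega} \lvert \nabla u \rvert \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^{2} \, dx
```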

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 190
12013 Shear Strength and Consolidation Behavior of Clayey Soil with Vertical and Radial Drainage

Authors: R. Pillai Aparna, S. R. Gandhi

Abstract:

Soft clay deposits having low strength and high compressibility are found all over the world. Preloading with vertical drains is a widely used method for improving such types of soils. The coefficient of consolidation, irrespective of the drainage type, plays an important role in the design of vertical drains, and it controls accurate prediction of the rate of consolidation of soil. Also, the increase in shear strength of soil with consolidation is another important factor considered in preloading or staged construction. To the best of our knowledge, no clear guidelines are available to estimate the increase in shear strength for a particular degree of consolidation (U) at various stages during the construction. Various methods are available for finding out the consolidation coefficient. This study mainly focuses on the variation of the consolidation coefficient, which was found out using different methods, and of shear strength with pressure intensity. The variation of shear strength with the degree of consolidation was also studied. The consolidation test was done using two types of highly compressible clays with vertical, radial and a few with combined drainage. The test was carried out at different pressure intensities, and for each pressure intensity, once the target degree of consolidation was achieved, a vane shear test was done at different locations in the sample in order to determine the shear strength. The shear strength of clayey soils under the application of vertical stress with vertical and radial drainage with target U values of 70% and 90% was studied. It was found that there is not much variation in the cv or cr value beyond 80 kPa pressure intensity. Correlations were developed between the shear strength ratio and consolidation pressure based on laboratory testing under controlled conditions. It was observed that the shear strength of the sample with a target U value of 90% is about 1.4 to 2 times that of the 70% consolidated sample. Settlement analysis was done using Asaoka’s and hyperbolic methods. The variation of strength with respect to the depth of the sample was also studied using a large-scale consolidation test. It was found, based on the present study, that the gain in strength is greater in the top half of the clay layer, and also that the shear strength of the sample ensuring radial drainage is slightly higher than that of the vertical drainage.

Keywords: consolidation coefficient, degree of consolidation, PVDs, shear strength

Procedia PDF Downloads 239
12012 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network

Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy

Abstract:

The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space, as well as trading off the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation which encodes some meaningful information into individual nodes in a network layer affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of input along with the capacity of localist representation for multiple instance selections affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.

Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence

Procedia PDF Downloads 128
12011 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification

Authors: Rujia Chen, Ajit Narayanan

Abstract:

Convolutional neural networks (CNN), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP as a gradient descent algorithm has well-known problems of getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution. In particular, the use of crossover techniques when optimizing weights can help to overcome problems of local optima. However, the application of GAs for evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show the proposed GA algorithm can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks like geometric shape recognition, the proposed algorithm can achieve 100% accuracy. The results for MNIST classification, while not as good as possible through standard filter learning through BP, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures.

Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels

Procedia PDF Downloads 186
12010 Organization of the Purchasing Function for Innovation

Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević

Abstract:

Various prominent scholars and a substantial practitioner-oriented literature on innovation orientation have shown positive effects of innovation on firm performance. There is a myriad of factors that influence and enhance innovation, but it has been found in the literature that new product innovations accounted for an average of 14 percent of sales revenues for all firms. If there is one thing that has changed in innovation management during the last decade, it is the growing reliance on external partners. As a consequence, a new task for purchasing arises, as firms need to understand which suppliers actually do have high potential to contribute to the innovativeness of the firm and which do not. The purchasing function in an organization is extremely important, as it deals with, on average, 50% or more of a firm's expenditures. In the nineties, the purchasing department was largely seen as a transaction-oriented, clerical function, but today purchasing integration provides a formal interface mechanism between purchasing and the other functions it serves within the company. The purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that some partner will take advantage of the other party), technological risks in terms of the complexity of products, manufacturing processes and incoming materials, and finally market risks, which in fact judge the value of the innovation. These risks are investigated in this work, since it has been found in the literature that the higher the technological risk, the higher will be the centralization of the purchasing function as an interface with other supply chain members. Most research on the organization of the purchasing function has been done through case study analysis of innovative firms. This work therefore seeks to confirm or refute results found in that case-study-based literature. A large data set of 1493 companies from 25 countries, collected in the GMRG 4 survey, served as the basis for the analysis.

Keywords: purchasing function organization, innovation, technological risk, GMRG 4 survey

Procedia PDF Downloads 482