Search results for: multiuser multi-input single-output
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11088

10758 An Application of the Single Equation Regression Model

Authors: S. K. Ashiquer Rahman

Abstract:

Recently, oil has become more influential in almost every economic sector as a key material. As the news shows, when there is a change in the oil price or OPEC announces a new strategy, the effect spreads directly and indirectly to every part of the economy. That is why people constantly observe the oil price and try to forecast its changes. The most important factor affecting the price is supply, which is determined by the number of wildcats drilled. Therefore, a study of the number of wellheads and other economic variables may give us some understanding of the mechanism behind the amount of oil supplied. In this paper, we consider the relationship between the number of wellheads and three key factors: the wellhead price, domestic output, and GNP in constant dollars. We also add trend variables to the models because the consumption of oil varies over time. Moreover, this paper uses an econometric method to estimate the parameters of the model, applies tests to verify the results we obtain, and then draws conclusions from the model.
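
A minimal sketch of the kind of single-equation model described above, written with statsmodels OLS. The file and column names are illustrative assumptions, not the paper's actual data set.

```python
# Hypothetical sketch: wildcat count regressed on wellhead price, domestic
# output, GNP (constant dollars) and a linear trend variable.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("wildcats.csv")          # assumed columns: wildcats, price, output, gnp
df["trend"] = range(1, len(df) + 1)       # linear trend variable

X = sm.add_constant(df[["price", "output", "gnp", "trend"]])
model = sm.OLS(df["wildcats"], X).fit()

print(model.summary())                    # t-tests, R^2, Durbin-Watson for verification
```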

Keywords: price, domestic output, GNP, trend variable, wildcat activity

Procedia PDF Downloads 51
10757 Extending Image Captioning to Video Captioning Using Encoder-Decoder

Authors: Sikiru Ademola Adewale, Joe Thomas, Bolanle Hafiz Matti, Tosin Ige

Abstract:

This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping takes an input temporal sequence of video frames to an output sequence of words that forms a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions are shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness.
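
As a rough illustration of the 2-gram BLEU evaluation mentioned above (not the authors' code; the candidate and reference tokens are invented):

```python
# 2-gram BLEU with NLTK: weights=(0.5, 0.5) restricts the score to unigram
# and bigram precision. Smoothing avoids zero scores on short captions.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["a", "man", "is", "playing", "a", "guitar"]]   # list of reference token lists
candidate = ["a", "man", "plays", "the", "guitar"]           # predicted caption tokens

score = sentence_bleu(reference, candidate,
                      weights=(0.5, 0.5),
                      smoothing_function=SmoothingFunction().method1)
print(f"2-gram BLEU: {score:.3f}")
```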

Keywords: decoder, encoder, many-to-many mapping, video captioning, 2-gram BLEU

Procedia PDF Downloads 93
10756 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images

Authors: Qiang Wang, Hongyang Yu

Abstract:

Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically use only color (RGB) images as input, our approach utilizes both the color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model with the standard 3D pose estimation metric of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.
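
For reference, the MPJPE metric reported above is simply the mean Euclidean distance between predicted and ground-truth joints. The sketch below uses invented array shapes and random data purely for illustration.

```python
# MPJPE: mean per-joint position error, averaged over joints (and typically
# over people and frames).
import numpy as np

def mpjpe(pred, gt):
    """pred, gt: arrays of shape (num_people, num_joints, 3), e.g. in millimetres."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

pred = np.random.rand(2, 15, 3) * 100   # dummy predictions: 2 people, 15 joints
gt = pred + np.random.randn(2, 15, 3)   # dummy ground truth
print(f"MPJPE: {mpjpe(pred, gt):.2f} mm")
```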

Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations

Procedia PDF Downloads 74
10755 Microwave Single Photon Source Using Landau-Zener Transitions

Authors: Siddhi Khaire, Samarth Hawaldar, Baladitya Suri

Abstract:

As efforts towards quantum communication advance, the need for single photon sources becomes pressing. Due to the extremely low energy of a single microwave photon, efforts to build single photon sources and detectors in the microwave range are relatively recent. We plan to use a Cooper Pair Box (CPB) that has a 'sweet spot' where the two energy levels have minimal separation. Moreover, these qubits have fairly large anharmonicity, making them close to ideal two-level systems. If the external gate voltage of these qubits is varied rapidly while passing through the sweet spot, the qubit can be excited almost deterministically due to the Landau-Zener effect. The rapid change of the gate control voltage through the sweet spot induces a non-adiabatic population transfer from the ground to the excited state. The qubit eventually decays into the emission line, emitting a single photon. The advantage of this setup is that the qubit can be excited without any coherent microwave excitation, thereby effectively increasing the usable source efficiency due to the absence of control-pulse microwave photons. Since the probability of a Landau-Zener transition can be made close to unity by appropriate design of the parameters, this source behaves as an on-demand source of single microwave photons. The large anharmonicity of the CPB also ensures that only one excited state is involved in the transition and multiple-photon output is highly improbable. Such a system has so far not been implemented and would find many applications in the areas of quantum optics, quantum computation, and quantum communication.
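
For context, the textbook Landau-Zener result (not taken from the paper, and independent of its specific device parameters) quantifies why a fast sweep through the avoided crossing gives near-deterministic excitation:

```latex
% Probability of the non-adiabatic (diabatic) transition for a sweep through an
% avoided crossing with minimal gap \Delta and sweep rate v = |d(\epsilon_1-\epsilon_2)/dt|:
P_{\mathrm{LZ}} = \exp\!\left(-\frac{\pi \Delta^{2}}{2\hbar v}\right),
\qquad P_{\mathrm{LZ}} \to 1 \quad \text{for fast sweeps } \left(v \gg \Delta^{2}/\hbar\right).
```

In this picture, a sufficiently rapid gate-voltage sweep makes the transition probability approach unity, which is the near-deterministic excitation the scheme relies on.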

Keywords: quantum computing, quantum communication, quantum optics, superconducting qubits, flux qubit, charge qubit, microwave single photon source, quantum information processing

Procedia PDF Downloads 90
10754 Erosion Modeling of Surface Water Systems for Long Term Simulations

Authors: Devika Nair, Sean Bellairs, Ken Evans

Abstract:

Flow and erosion modeling provides an avenue for simulating fine suspended sediment in surface water systems such as streams and creeks. Fine suspended sediment is highly mobile, and many contaminants released by catchment disturbance attach themselves to these sediments; therefore, knowledge of fine suspended sediment transport is important in assessing contaminant transport. The CAESAR-Lisflood Landform Evolution Model, which includes a hydrologic model (TOPMODEL) and a hydraulic model (Lisflood), is being used to assess sediment movement in tropical streams resulting from a disturbance in the catchment of the creek and to determine the dynamics of sediment quantity in the creek over the years by simulating future scenarios. The accuracy of future simulations depends on the calibration and validation of the model against past and present events. Calibration and validation involve finding a combination of model parameters which, when applied and simulated, gives model outputs similar to those observed at the real site for the corresponding input data. Calibrating the sediment output of the CAESAR-Lisflood model at the catchment level and using it to study the equilibrium conditions of the landform is an area yet to be explored. Therefore, the aim of the study was to calibrate the CAESAR-Lisflood model and then validate it so that it could be run for future simulations to study how the landform evolves over time. To achieve this, the model was run for a rainfall event with a set of parameters, together with discharge and sediment data for the input point of the catchment, and the model output was compared with the discharge and sediment data observed at the output point of the catchment. The model parameters were then adjusted until the model closely approximated the observed values for the catchment. It was then validated by running the model for a different set of events and checking that the model gave similar results to the observed values. The outcomes demonstrated that while the model can be calibrated well for hydrology (discharge output) throughout the year, the sediment output calibration could be slightly improved by the ability to change parameters to account for seasonal vegetation growth at the start and end of the wet season. This study is important for assessing hydrology and sediment movement in seasonal biomes. The understanding of sediment-associated metal dispersion processes in rivers can be used in a practical way to help river basin managers more effectively control and remediate catchments affected by present and historical metal mining.

Keywords: erosion modelling, fine suspended sediments, hydrology, surface water systems

Procedia PDF Downloads 81
10753 A Multi-Objective Programming Model for Supplier Selection and Order Allocation Problem in Stochastic Environment

Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh

Abstract:

This paper aims at developing a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered with delay, and the percentage of rejected items provided by each supplier are assumed to be stochastic parameters following arbitrary probability distributions. In this regard, dependent chance programming is used, which maximizes the probability of the event that the total purchasing cost, the total items delivered with delay, and the total rejected items are less than or equal to pre-determined values given by the decision maker. The above stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by applying a genetic algorithm, which performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the obtained solution is examined via a sensitivity analysis exploiting the coefficient of variation. The results show that the greater the coefficients of variation of the stochastic parameters, the worse the value of the objective function in the stochastic single-objective programming problem.
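
A hedged sketch of the simulation-based fitness evaluation used inside such a genetic algorithm: for a candidate order allocation, estimate by Monte Carlo the probability that the total purchasing cost stays within the decision maker's budget. Normally distributed unit costs are an assumption here; the paper allows arbitrary distributions.

```python
# Monte Carlo estimate of the chance-programming objective P(total cost <= budget)
# for one candidate chromosome (order quantities per supplier).
import numpy as np

rng = np.random.default_rng(0)

def chance_fitness(allocation, cost_mean, cost_std, budget, n_sim=10_000):
    """allocation: units ordered from each supplier (1D array)."""
    # draw random unit costs for every supplier in every simulation run
    unit_costs = rng.normal(cost_mean, cost_std, size=(n_sim, len(allocation)))
    total_cost = unit_costs @ allocation
    return np.mean(total_cost <= budget)     # estimated probability

alloc = np.array([120, 80, 50])              # candidate order quantities (made up)
print(chance_fitness(alloc, cost_mean=[10, 12, 9], cost_std=[1.5, 2.0, 1.0], budget=2600))
```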

Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm

Procedia PDF Downloads 308
10752 An Observer-Based Direct Adaptive Fuzzy Sliding Control with Adjustable Membership Functions

Authors: Alireza Gholami, Amir H. D. Markazi

Abstract:

In this paper, an observer-based direct adaptive fuzzy sliding mode (OAFSM) algorithm is proposed. In the proposed algorithm, the zero-input dynamics of the plant may be unknown. The input connection matrix is used to combine the sliding surfaces of the individual subsystems, and an adaptive fuzzy algorithm is used to estimate an equivalent sliding mode control input directly. The fuzzy membership functions, which were determined by time-consuming trial-and-error processes in previous works, are adjusted by adaptive algorithms. Another advantage of the proposed controller is that the input gain matrix is not limited to being diagonal, i.e., the plant may be over- or under-actuated provided that controllability and observability are preserved. An observer is constructed to directly estimate the state tracking error, and the nonlinear part of the observer is constructed by an adaptive fuzzy algorithm. The main advantage of the proposed observer is that the measured output is not limited to the first entry of a canonical-form state vector. The closed-loop stability of the proposed method is proved using a Lyapunov-based approach. The proposed method is applied numerically to a multi-link robot manipulator, which verifies the performance of the closed-loop control. Moreover, the performance of the proposed algorithm is compared with some conventional control algorithms.

Keywords: adaptive algorithm, fuzzy systems, membership functions, observer

Procedia PDF Downloads 198
10751 Multi-Temporal Cloud Detection and Removal in Satellite Imagery for Land Resources Investigation

Authors: Feng Yin

Abstract:

Clouds are inevitable contaminants in optical satellite imagery and prevent satellite imaging systems from acquiring a clear view of the Earth's surface. The presence of clouds in satellite imagery has a negative influence on remote sensing for land resources investigation. As a consequence, detecting the locations of clouds in satellite imagery is an essential preprocessing step, and removing the existing clouds is crucial for the application of the imagery. In this paper, a multi-temporal satellite imagery cloud detection and removal method is proposed, which will be used for large-scale land resources investigation. The proposed method is composed of four main steps. First, cloud masks are generated for cloud-contaminated images by single-temporal cloud detection based on multiple spectral features. Then, a cloud-free reference image of the target areas is synthesized by weighted averaging of time-series images in which cloud pixels are ignored. Thirdly, refined cloud detection results are acquired by multi-temporal analysis based on the reference image. Finally, detected clouds are removed via multi-temporal linear regression. The results of a case application in Hubei province indicate that the proposed multi-temporal cloud detection and removal method is effective and promising for large-scale land resources investigation.
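
A simplified, single-band sketch of the final regression-based filling step described above (not the authors' implementation): fit the target-vs-reference radiometry on clear pixels, then predict the cloudy pixels from the cloud-free reference. Array shapes and the cloud mask below are illustrative.

```python
import numpy as np

def fill_clouds(target, reference, cloud_mask):
    """target, reference: 2D arrays of the same scene at different dates;
    cloud_mask: boolean array, True where `target` is cloud-contaminated."""
    clear = ~cloud_mask
    # linear regression target = a * reference + b, fitted on clear pixels only
    a, b = np.polyfit(reference[clear], target[clear], deg=1)
    filled = target.copy()
    filled[cloud_mask] = a * reference[cloud_mask] + b
    return filled

target = np.random.rand(100, 100)
reference = target * 0.9 + 0.05           # synthetic cloud-free composite
mask = np.zeros_like(target, dtype=bool)
mask[40:60, 40:60] = True                 # pretend this block is cloudy
result = fill_clouds(target, reference, mask)
```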

Keywords: cloud detection, cloud removal, multi-temporal imagery, land resources investigation

Procedia PDF Downloads 268
10750 The Effect of Macroeconomic Policies on Cambodia's Economy: ARDL and VECM Model

Authors: Siphat Lim

Abstract:

This study used the Autoregressive Distributed Lag (ARDL) approach to cointegration. In the long run, the general price level and the exchange rate have a positive and significant effect on domestic output. The estimation results further revealed that fiscal stimulus helps stimulate domestic output in the long run, but not in the short run, while monetary expansion helps to stimulate output in both the short run and the long run. This result complies with the theory that macroeconomic policies, fiscal and monetary, help stimulate domestic output in the long run. The estimation results of the Vector Error Correction Model (VECM) indicate even more clearly that the consumer price index has a positive and highly statistically significant effect on output; an increase in the general price level would increase competitiveness among producers more than it would increase output. The exchange rate also has a positive and highly significant effect on gross domestic product; exchange rate depreciation might increase exports, since the purchasing power of foreign buyers increases. More importantly, fiscal stimulus would help stimulate domestic output in the long run, since the coefficient of government expenditure is positive. In addition, monetary expansion would also help stimulate output, and this result is highly significant. Thus, fiscal stimulus and monetary expansion would help stimulate domestic output in the long run in Cambodia.
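
A minimal sketch (not the authors' code) of estimating a VECM of this kind with statsmodels. The column names, lag order, and cointegration rank below are illustrative assumptions.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

df = pd.read_csv("cambodia_macro.csv", index_col="date", parse_dates=True)
# assumed columns: gdp, cpi, exchange_rate, gov_expenditure, money_supply

model = VECM(df[["gdp", "cpi", "exchange_rate", "gov_expenditure", "money_supply"]],
             k_ar_diff=2,          # number of lagged differences (assumed)
             coint_rank=1,         # one cointegrating relation (assumed)
             deterministic="ci")   # constant inside the cointegration relation
res = model.fit()
print(res.summary())               # long-run (beta) and adjustment (alpha) coefficients
```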

Keywords: fiscal policy, monetary policy, ARDL, VECM

Procedia PDF Downloads 426
10749 Examination of Public Hospital Unions' Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques

Authors: Songul Cinaroglu

Abstract:

Regional planning in health has gained speed in developing countries in recent years. In Turkey, 89 Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, after dividing them into two clusters according to the similarities of their input and output indicators. The number of beds, physicians, and nurses were determined as input variables, and the number of outpatients, inpatients, and surgical operations were determined as output indicators. Before performing DEA, the PHUs were grouped into two clusters. The first cluster represents PHUs which have higher population, demand, and service density than the others. The difference between clusters was statistically significant for all study variables (p < 0.001). After clustering, DEA was performed for the whole sample and for the two clusters separately. It was found that 11% of PHUs were efficient overall, while 21% and 17% of them were efficient within the first and second clusters, respectively. PHUs that represent urban parts of the country and have higher population and service density are more efficient than the others. A random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinative factor of PHU efficiency. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
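
One common way to compute such technical efficiencies is an input-oriented CCR DEA model; the sketch below solves it with scipy's linear programming routine. The data values are made up, and the exact DEA orientation used in the study is an assumption.

```python
# Input-oriented CCR DEA: for each unit k, minimise theta subject to
#   sum_j lambda_j * x_j <= theta * x_k   (inputs)
#   sum_j lambda_j * y_j >= y_k           (outputs), lambda >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[300, 120, 400],            # inputs: beds, physicians, nurses (one row per PHU)
              [250, 100, 350],
              [500, 220, 700]], dtype=float)
Y = np.array([[90_000, 20_000, 5_000],    # outputs: outpatients, inpatients, operations
              [70_000, 15_000, 4_200],
              [160_000, 41_000, 9_500]], dtype=float)
n, m = X.shape                            # number of units, number of inputs
s = Y.shape[1]                            # number of outputs

def ccr_efficiency(k):
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[k], X.T]              # X.T @ lam - theta * x_k <= 0
    A_out = np.c_[np.zeros(s), -Y.T]      # -Y.T @ lam <= -y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for k in range(n):
    print(f"PHU {k}: efficiency = {ccr_efficiency(k):.3f}")
```

Units on the efficient frontier score 1; the remaining units receive scores below 1.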

Keywords: public hospital unions, efficiency, data envelopment analysis, random forest

Procedia PDF Downloads 121
10748 An Integrated Approach to Find the Effect of Strain Rate on Ultimate Tensile Strength of Randomly Oriented Short Glass Fiber Composite in Combination with Artificial Neural Network

Authors: Sharad Shrivastava, Arun Jalan

Abstract:

In this study, tensile testing was performed on randomly oriented short glass fiber/epoxy resin composite specimens prepared using the hand lay-up method. Samples were tested over a wide range of strain rates/loading rates, from 2 mm/min to 40 mm/min, to see the effect on the ultimate tensile strength (UTS) of the composite. A multi-layered back-propagation artificial neural network of the supervised learning type was used to analyze and predict the tensile properties, with strain rate and temperature as the inputs and UTS as the output to be predicted. Various network structures were designed and investigated with varying parameters and network sizes, and an optimized network structure is proposed to predict the UTS of short glass fiber/epoxy resin composite specimens with reasonably good accuracy.
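
A small sketch of a back-propagation network mapping strain rate and temperature to UTS, using scikit-learn. The architecture and the training values below are placeholders, not the authors' network or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: strain rate (mm/min), temperature (deg C); target: UTS (MPa) -- placeholder values
X = np.array([[2, 25], [10, 25], [20, 25], [30, 25], [40, 25],
              [2, 50], [10, 50], [20, 50], [30, 50], [40, 50]], dtype=float)
y = np.array([78, 82, 85, 88, 90, 70, 74, 77, 80, 83], dtype=float)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8, 8), activation="tanh",
                                   solver="lbfgs", max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[25, 25]]))   # predicted UTS at an unseen loading rate
```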

Keywords: glass fiber composite, mechanical properties, strain rate, artificial neural network

Procedia PDF Downloads 436
10747 Study on Multi-Point Stretch Forming Process for Double Curved Surface

Authors: Jiwoo Park, Junseok Yoon, Jeong Kim, Beomsoo Kang

Abstract:

The Multi-Point Stretch Forming (MPSF) process is suitable for flexible manufacturing and has several advantages, including that it can be applied to various forming operations such as sheet metal forming and the forming of single and double curved surfaces. In this study, a systematic numerical simulation was carried out for atypical double curved surface forming using the multiple-die stretch forming process. In this simulation, urethane pads were defined based on a hyper-elastic material model as a cushion for a smooth forming surface. The deformation behaviour on elastic recovery was also investigated to capture the exact result after the final forming step, and an experiment was carried out to confirm the formability of this forming process. By comparing the simulation and experiment results, the suitability of the multiple-die stretch forming process for the atypical double curved surface was verified. Consequently, it is confirmed that the multi-point stretch forming process has the capability and feasibility to be used to manufacture double curved surfaces of sheet metal.

Keywords: multi-point stretch forming, double curved surface, numerical simulation, manufacturing

Procedia PDF Downloads 474
10746 DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations

Authors: Xiao Zhou, Jianlin Cheng

Abstract:

A single amino acid mutation can have a significant impact on the stability of a protein structure. Thus, the prediction of the protein stability change induced by single-site mutations is critical and useful for studying protein function and structure. Here, we present a deep learning network with the dropout technique for predicting protein stability changes upon single amino acid substitution. While using only the protein sequence as input, the overall prediction accuracy of the method on a standard benchmark is >85%, which is higher than existing sequence-based methods and is comparable to methods that use not only the protein sequence but also the tertiary structure, pH value, and temperature. The results demonstrate that deep learning is a promising technique for protein stability prediction. The good performance of this sequence-based method makes it a valuable tool for predicting the impact of mutations on most proteins whose experimental structures are not available. Both a downloadable software package and a user-friendly web server (DNpro) that implement the method for predicting protein stability changes induced by amino acid mutations are freely available for the community to use.

Keywords: bioinformatics, deep learning, protein stability prediction, biological data mining

Procedia PDF Downloads 454
10745 Application of Deep Learning in Top Pair and Single Top Quark Production at the Large Hadron Collider

Authors: Ijaz Ahmed, Anwar Zada, Muhammad Waqas, M. U. Ashraf

Abstract:

We demonstrate the performance of a very efficient tagger, based on deep neural network algorithms, applied to hadronically decaying top quark pairs as signal and compared with QCD multi-jet background events. A significant enhancement of performance in boosted top quark events is observed with our limited computing resources. We also compare modern machine learning approaches and perform a multivariate analysis of boosted top-pair as well as single top quark production through the weak interaction at a √s = 14 TeV proton-proton collider. The most relevant known background processes are incorporated. Through the techniques of Boosted Decision Trees (BDT), likelihood, and Multilayer Perceptron (MLP), the analysis is trained and its performance is compared with the conventional cut-and-count approach.

Keywords: top tagger, multivariate, deep learning, LHC, single top

Procedia PDF Downloads 104
10744 Genetic Algorithm Optimization of Multiple Resources for Multi-Projects

Authors: A. Samer Ezeldin, Sarah A. Fotouh

Abstract:

Optimization of resources is very important in all fields, including construction management. Project managers face problems regarding the management of cost, time, and available resources of single projects, and more problems arise when managing multiple projects. Most studies have focused on the optimization of resources for a single project; this paper discusses the design and modeling of multiple-resource optimization for multiple projects using a genetic algorithm. Most companies in the construction industry optimize resources for single projects only, but with several mega projects running at the same time in several developing countries, there is a need for a model to enhance the efficiency of available resources and decrease fluctuation as much as possible. The proposed model calculates the cost of each resource, tries to minimize the cost of extra resources as much as possible, and generates the schedule of each project within a selected program.

Keywords: construction management, genetic algorithm, multiple projects, multiple resources, optimization

Procedia PDF Downloads 449
10743 Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation

Authors: Lae-Jeong Park

Abstract:

The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors that rely on local features such as HOG, chosen for real-time operation, suffer from false positives. The color cue in an input image provides salient information on a global characteristic, which is necessary to alleviate the false positives of local-feature-based detectors. An effective approach that uses figure-ground color segmentation has previously been presented in an effort to reduce false positives in object detection. In this paper, an extended version of the approach is presented that adopts separate multi-part foregrounds instead of a single prior foreground and performs figure-ground color segmentation with each of the foregrounds. The multi-part foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature that consists of the set of the resulting segmentations. Experimental results show that the presented method can reject more false positives than the single-prior shape-based classifier as well as detectors based on local features. The improvement is possible because the presented approach can reduce the false positives that have the same colors in the head and shoulder foregrounds.

Keywords: pedestrian detection, color segmentation, false positive, feature extraction

Procedia PDF Downloads 274
10742 Compact Dual-band 4-MIMO Antenna Elements for 5G Mobile Applications

Authors: Fayad Ghawbar

Abstract:

Multiple Input Multiple Output (MIMO) systems are essential in 5G wireless communication to enhance channel capacity and provide a high data rate, which results in a need for dual polarization in the vertical and horizontal planes. Furthermore, size reduction is critical in a MIMO system to deploy more antenna elements, requiring a compact, low-profile design. A compact dual-band 4-MIMO antenna system with pattern and polarization diversity is presented in this paper. The proposed single antenna structure is designed using two antenna layers, with a C shape in the front layer and a partial slot with a U-shaped cut in the ground to enhance isolation. The single antenna is printed on an FR4 dielectric substrate with an overall size of 18 mm × 18 mm × 1.6 mm. The 4-MIMO antenna elements were printed orthogonally on an FR4 substrate with dimensions of 36 × 36 × 1.6 mm³ and zero edge-to-edge separation distance. The proposed compact 4-MIMO antenna elements resonate at 3.4-3.6 GHz and 4.8-5 GHz. The measured and simulated S-parameters agree, especially in the lower band, with a slight frequency shift of the measured results in the upper band due to fabrication imperfections. The proposed design shows isolation better than 15 dB and 22 dB across the 4-MIMO elements. The MIMO diversity performance has been evaluated in terms of efficiency, ECC, DG, TARC, and CCL. The total and radiation efficiencies were above 50% in both frequency bands. The ECC values were lower than 0.10, and the DG results were about 9.95 dB for all antenna elements. TARC results exhibited values lower than 0 dB, with values lower than -25 dB for all MIMO elements in the dual bands. Moreover, the channel capacity losses in the MIMO system were characterized using CCL, with values lower than 0.4 bits/s/Hz.
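
For reference, ECC and DG for a two-element pair are often computed from S-parameters using the common lossless-antenna approximation shown below (a far-field-based ECC would differ). The S-parameter values are dummies, not the paper's measurements.

```python
import numpy as np

def ecc_from_sparams(s11, s12, s21, s22):
    # S-parameter ECC formula for a two-port, lossless-antenna approximation
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

s11, s12 = 0.18 * np.exp(1j * 0.4), 0.05 * np.exp(-1j * 1.1)
s21, s22 = 0.05 * np.exp(-1j * 1.1), 0.20 * np.exp(1j * 0.7)

ecc = ecc_from_sparams(s11, s12, s21, s22)
dg = 10 * np.sqrt(1 - ecc ** 2)          # diversity gain in dB
print(f"ECC = {ecc:.4f}, DG = {dg:.2f} dB")
```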

Keywords: compact antennas, MIMO antenna system, 5G communication, dual band, ECC, DG, TARC

Procedia PDF Downloads 139
10741 Effects of Inlet Filtration Pressure Loss on Single and Two-Spool Gas Turbine

Authors: Enyia James Diwa, Dodeye Ina Igbong, Archibong Archibong Eso

Abstract:

Gas turbine operators have been faced with dramatic financial setbacks resulting from compressor fouling. In a highly deregulated power industry with stiff market competition, it has become imperative to devise means of reducing maintenance costs in order to yield maximum profit. Compressor fouling results from the deposition of contaminants, in the presence of oil and moisture, on the compressor blade or annulus surfaces, which leads to a loss in flow capacity and compressor efficiency. These combined effects reduce power output, increase heat rate, and cause creep life reduction. This paper also contains a model of two gas turbine engines built with the Cranfield University software known as TURBOMATCH, a simulation tool for assessing engine fouling rates. The model engines are of different configurations and capacities and are operated in two different modes, constant output power and constant turbine inlet temperature, for two- and three-stage filter systems. The idea is to investigate, on the basis of performance alone, which filtration system is more economically viable for gas turbine users. The results demonstrate that the two-spool engine is slightly more beneficial than the single-spool engine. This is a result of the higher pressure ratio of the two-spool engine as well as the deceleration of the high-pressure compressor and high-pressure turbine speed at constant TET. Meanwhile, the inlet filtration system was properly designed and balanced with a well-timed and economical compressor washing regime/scheme to control compressor fouling. The different technologies of inlet air filtration and compressor washing are considered, and an attempt at optimization with respect to the cost of a combination of both control measures is made.

Keywords: inlet filtration, pressure loss, single spool, two spool

Procedia PDF Downloads 317
10740 A Methodology for Automatic Diversification of Document Categories

Authors: Dasom Kim, Chen Liu, Myungsu Lim, Su-Hyeon Jeon, ByeoungKug Jeon, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually. However, with manual categorization, not only can the accuracy of the categorization not be guaranteed, but the categorization also requires a large amount of time and incurs high costs. Many studies have been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they assume that one document can be assigned to one category only. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process involves training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we previously proposed a methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. In this paper, we design a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.

Keywords: big data analysis, document classification, multi-category, text mining, topic analysis

Procedia PDF Downloads 265
10739 Optical Multicast over OBS Networks: An Approach Based on Code-Words and Tunable Decoders

Authors: Maha Sliti, Walid Abdallah, Noureddine Boudriga

Abstract:

In this work, we present an optical multicasting approach based on optical code-words. Our approach associates, at the edge node, an optical code-word with a multicast group address. At the core node, a set of tunable decoders is used to send traffic data to multiple destinations based on the received code-word. The use of code-words, which correspond to the combination of an input port and a set of output ports, allows the implementation of an optical switching matrix. Upon reception of a burst, it is delayed in an optical memory, and the received optical code-word is split to a set of tunable optical decoders. When it matches a configured code-word, the delayed burst is switched to a set of output ports.

Keywords: optical multicast, optical burst switching networks, optical code-words, tunable decoder, virtual optical memory

Procedia PDF Downloads 603
10738 ANN Based Simulation of PWM Scheme for Seven Phase Voltage Source Inverter Using MATLAB/Simulink

Authors: Mohammad Arif Khan

Abstract:

This paper presents and analyzes the development of an Artificial Neural Network based space vector PWM controller (ANN-SVPWM) for a seven-phase voltage source inverter. First, the conventional method of producing a sinusoidal output voltage, in which six active and one zero space vectors are used to synthesize the input reference, is elaborated, and then a new PWM scheme called Artificial Neural Network Based PWM is presented. The ANN-based controller has the advantages of very fast implementation and analysis and avoids the direct computation of trigonometric and non-linear functions. The ANN controller uses an individual training strategy with fixed weights and supervised models. A computer simulation program has been developed using MATLAB/Simulink together with the Neural Network Toolbox for training the ANN controller. A comparison of the proposed scheme with the conventional scheme is presented based on various performance indices. Extensive simulation results are provided to validate the findings.

Keywords: space vector PWM, total harmonic distortion, seven-phase, voltage source inverter, multi-phase, artificial neural network

Procedia PDF Downloads 450
10737 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network

Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy

Abstract:

The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space, as well as trading off the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation which encodes some meaningful information into individual nodes in a network layer affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of input along with the capacity of localist representation for multiple instance selections affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.

Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence

Procedia PDF Downloads 123
10736 Finding Optimal Operation Condition in a Biological Nutrient Removal Process with Balancing Effluent Quality, Economic Cost and GHG Emissions

Authors: Seungchul Lee, Minjeong Kim, Iman Janghorban Esfahani, Jeong Tai Kim, ChangKyoo Yoo

Abstract:

It is hard to maintain the effluent quality of wastewater treatment plants (WWTPs) with fixed types of operational control because of the continuously changing influent flow rate and pollutant load. The aim of this study is the development of a plant-wide multi-loop multi-objective control (ML-MOC) strategy targeting four objectives: 1) maximization of nutrient removal efficiency, 2) minimization of operational cost, 3) maximization of CH4 production in anaerobic digestion (AD) for CH4 reuse as a heat and energy source, and 4) minimization of N2O gas emission to cope with global warming. First, the benchmark simulation model is modified to describe N2O dynamics in the biological process, yielding the benchmark simulation model for greenhouse gases (BSM2G). Then, three types of single-loop proportional-integral (PI) controllers, for DO, NO3, and CH4, are implemented. The optimal set-points of the controllers are found by using a multi-objective genetic algorithm (MOGA). Finally, the ML-MOC is implemented and evaluated in BSM2G. Compared with the reference case, the ML-MOC with the optimal set-points showed the best control performance, with improvements of 34%, 5%, and 79% in effluent quality, CH4 productivity, and N2O emission, respectively, and a 65% decrease in operational cost.

Keywords: benchmark simulation model for greenhouse gases, multi-loop multi-objective controller, multi-objective genetic algorithm, wastewater treatment plant

Procedia PDF Downloads 497
10735 A New Approach to the Digital Implementation of Analog Controllers for a Power System Control

Authors: G. Shabib, Esam H. Abd-Elhameed, G. Magdy

Abstract:

In this paper, a comparison of discrete-time PID and PSS controllers is presented through the small-signal stability of a power system comprising one machine connected to an infinite bus. This comparison is achieved by using a new discretization approach that converts the s-domain model of the analog controllers to a z-domain model in order to enhance the damping of a single-machine power system. The new method utilizes the Plant Input Mapping (PIM) algorithm. The proposed algorithm is stable for any sampling rate and takes the closed-loop characteristics into consideration. On the other hand, traditional discretization methods, such as Tustin's method, produce satisfactory results only when the sampling period is sufficiently small.
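
For context, the sketch below shows the conventional Tustin (bilinear) mapping that the paper compares against, applied to an illustrative PID-like transfer function with scipy; the PIM procedure itself is not reproduced here, and the coefficients and sampling period are made up.

```python
from scipy.signal import cont2discrete

# C(s) = (Kd*s^2 + Kp*s + Ki) / (s*(tau*s + 1)), with a high-frequency pole for properness
Kp, Ki, Kd, tau = 2.0, 1.0, 0.1, 0.01
num = [Kd, Kp, Ki]
den = [tau, 1.0, 0.0]

Ts = 0.02                      # sampling period in seconds (assumed)
numd, dend, dt = cont2discrete((num, den), Ts, method="bilinear")
print("z-domain numerator:  ", numd.ravel())
print("z-domain denominator:", dend)
```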

Keywords: power system stabilizer (PSS), proportional-integral-derivative (PID), plant input mapping (PIM)

Procedia PDF Downloads 499
10734 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound, in the first case, and the friction of the material passing through the die, in the second case. Case 1, Tension Control of a Winding Process: a plastic web is unwound from a first reel, goes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, i.e., to find the input values which result in a given tension. Case 2, Friction Force Control of a Micro-Pullwinding Process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and there is a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration, i.e., to find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous value as output). As a previous step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between the expected and real friction on die 2). The modeling of the error behavior using explicative rules is used to help improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that a high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but this can be done offline just once. The calibration step is much faster and, in under one minute, obtained a precision error of less than 1x10-3 for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve the overall process understanding. This has relevance for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
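
A hedged sketch of the Gaussian random-search calibration step described above: draw random candidates for each input from its mean and standard deviation, run them through the trained model, and keep the candidate whose prediction is closest to the target. `model` stands in for any previously fitted regressor (e.g. kernel ridge or SVR).

```python
import numpy as np

rng = np.random.default_rng(42)

def calibrate(model, input_means, input_stds, target, n_samples=20_000):
    # Gaussian candidates, one column per input variable
    candidates = rng.normal(input_means, input_stds, size=(n_samples, len(input_means)))
    predictions = model.predict(candidates)
    best = np.argmin(np.abs(predictions - target))   # closest fit to the desired output
    return candidates[best], predictions[best]

# usage sketch (assumes `model` was fitted beforehand; values are illustrative):
# inputs, pred = calibrate(model, input_means=[1.2, 340.0, 25.0],
#                          input_stds=[0.1, 15.0, 2.0], target=50.0)
```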

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 238
10733 Dimensional Accuracy of CNTs/PMMA Parts and Holes Produced by Laser Cutting

Authors: A. Karimzad Ghavidel, M. Zadshakouyan

Abstract:

Laser cutting is a very common production method for cutting 2D polymeric parts. The development of polymer composites with nano-fibers makes their other properties, such as laser workability, important. The aim of this research is to investigate the influence of different laser cutting conditions on the dimensional accuracy of parts and holes made from poly(methyl methacrylate) (PMMA)/carbon nanotube (CNT) material. Experiments were carried out considering CNT content (at four levels: 0, 0.5, 1, and 1.5 wt.%), laser power (60, 80, and 100 W), and cutting speed (20, 30, and 40 mm/s) as input variable factors. The results reveal that adding CNTs improves the laser workability of PMMA and that increasing the power has a significant effect on the part and hole size. The findings also show that cutting speed is an effective parameter for size accuracy. Finally, statistical analysis of the results was carried out, and mathematical equations calculated by regression are presented for determining the relation between the input and output factors.

Keywords: dimensional accuracy, PMMA, CNTs, laser cutting

Procedia PDF Downloads 303
10732 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known to be a control tool to overcome periodic disturbances in repetitive systems. This technique is required to let the error signal tend to zero as the number of operations increases. The learning process that lies within this context is strongly dependent on the initial input, which, if selected properly, tends to make the learning process more effective compared to the case where the system starts blind. ILC uses previously recorded execution data to update the following execution/trial input such that a reference trajectory is followed to a high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant at trial 1; thus a good choice of the initial input signal would make learning faster, and as a consequence the error tends to zero faster as well. In the work presented here, an upper limit based on the Singular Value (SV) principle is derived for the initial input signal applied at trial 1 such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, for example a robot arm, is required to move. Simulation results are presented to illustrate the theory introduced in this paper.
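
For background only (textbook first-order ILC in the lifted description y_k = G u_k + d, not the paper's specific bound), the trial-to-trial update and the singular-value condition for monotonic error convergence are:

```latex
u_{k+1} = u_k + L e_k, \qquad e_k = r - y_k,
\qquad \bar{\sigma}\!\left(I - GL\right) < 1
\;\Rightarrow\; \|e_{k+1}\|_2 \le \bar{\sigma}(I - GL)\,\|e_k\|_2 ,
```

which is the standard setting in which singular values bound how the error, and hence the admissible input, evolves from trial to trial.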

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 235
10731 Attention-based Adaptive Convolution with Progressive Learning in Speech Enhancement

Authors: Tian Lan, Yixiang Wang, Wenxin Tai, Yilan Lyu, Zufeng Wu

Abstract:

The monaural speech enhancement task in the time-frequency domain has a myriad of approaches, with the stacked convolutional neural network (CNN) demonstrating superior ability in feature extraction and selection. However, using stacked single convolutions limits feature representation capability and generalization ability. In order to solve the aforementioned problem, we propose an attention-based adaptive convolutional network that integrates multi-scale convolutional operations into an operation-specific block via input-dependent attention to adapt to complex auditory scenes. In addition, we introduce a two-stage progressive learning method to enlarge the receptive field without a dramatic increase in computation burden. We conduct a series of experiments based on the TIMIT corpus, and the experimental results prove that our proposed model is better than the state-of-the-art models on all metrics.

Keywords: speech enhancement, adaptive convolution, progressive learning, time-frequency domain

Procedia PDF Downloads 117
10730 Estimating Directional Shadow Prices of Air Pollutant Emissions by Transportation Modes

Authors: Huey-Kuo Chen

Abstract:

This paper applies a directional marginal productivity model to study the shadow prices of emissions from transportation modes in the years 2011 and 2013, with the aim of providing a reference for policy makers seeking to reduce pollutant emissions. One input variable (i.e., energy consumption), one desirable output variable (i.e., vehicle kilometers traveled), and three undesirable output variables (i.e., carbon dioxide, sulfur oxides, and nitrogen oxides) generated by road transportation modes were used to evaluate the directional marginal productivity and directional shadow price for 18 transportation modes. The results show that the directional shadow price (DSP) of SOx is much higher than that of CO2 and NOx. Nevertheless, the emission of CO2 is the largest among the three kinds of pollutants. To improve air quality, the government should pay more attention to the emission of CO2 and apply alternative solutions such as promoting public transportation and subsidizing electric vehicles to reduce the use of private vehicles.

Keywords: marginal productivity, road transportation modes, shadow price, undesirable outputs

Procedia PDF Downloads 140
10729 Experimental Investigation and Optimization of Nanoparticle Mass Concentration and Heat Input of Loop Heat Pipe

Authors: P. Gunnasegaran, M. Z. Abdullah, M. Z. Yusoff, Nur Irmawati

Abstract:

This study presents the experimental investigation and optimization of nanoparticle mass concentration and heat input based on the total thermal resistance (Rth) of a loop heat pipe (LHP) employed for PC-CPU cooling. In this study, silica nanoparticles (SiO2) in water, with particle mass concentrations ranging from 0% (pure water) to 1%, are considered as the working fluid within the LHP. The experimental design and optimization are accomplished using the design-of-experiments tool Response Surface Methodology (RSM). The results show that the nanoparticle mass concentration and the heat input have a significant effect on the Rth of the LHP. For a given heat input, the Rth is found to decrease with increasing nanoparticle mass concentration up to 0.5% and to increase thereafter. It is also found that the Rth decreases when the heat input is increased from 20 W to 60 W. The results are optimized with the objective of minimizing the Rth using Design-Expert software; the optimized nanoparticle mass concentration and heat input are 0.48% and 59.97 W, respectively, with a minimum thermal resistance of 2.66 ºC/W.

Keywords: loop heat pipe, nanofluid, optimization, thermal resistance

Procedia PDF Downloads 455