Search results for: optimal reaction network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9789

8799 Social Network Analysis, Social Power in Water Co-Management (Case Study: Iran, Shemiranat, Jirood Village)

Authors: Fariba Ebrahimi, Mehdi Ghorbani, Ali Salajegheh

Abstract:

Comprehensive water management considers economic, environmental, technical and social aspects, as well as the sustainability of water resources for future generations. Management of such common resources implies a cooperative approach that involves all stakeholders and brings their issues to managers, decision makers and policy makers. Solving these issues requires an integrated, systems approach, and recognition of the key actors is necessary for cooperative water management. Stakeholder analysis and social network analysis can therefore be used to identify the most influential actors in environmental decisions. In this research, social power is characterized using a social network approach at the level of water users in the Jirood catchment of the Latian basin. Water users were identified through field trips, and trust and collaboration matrices were then produced using questionnaires. In the next step, the degree centrality index was examined. Finally, the geometric position of each actor was illustrated in the network. The results based on the centrality index play a key role in establishing cooperative water management in Jirood and will also help water managers and planners recognize social powers when organizing and implementing sustainable water management.
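
The computation at the core of this approach, deriving degree centrality from a trust or collaboration matrix, can be sketched briefly. The actor names and matrix entries below are hypothetical placeholders, not data from the Jirood study:

```python
import networkx as nx
import numpy as np

# Hypothetical trust matrix: entry [i][j] = 1 if actor i trusts actor j.
trust = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
])
actors = ["A", "B", "C", "D"]

G = nx.from_numpy_array(trust, create_using=nx.DiGraph)
G = nx.relabel_nodes(G, dict(enumerate(actors)))

# Degree centrality: fraction of other actors each node is connected to.
for actor, c in sorted(nx.degree_centrality(G).items(),
                       key=lambda kv: -kv[1]):
    print(f"{actor}: {c:.2f}")
```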

Keywords: social network analysis, water co-management, social power, centrality index, local stakeholders network, Jirood catchment

Procedia PDF Downloads 372
8798 Proposed Framework based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks

Authors: Shidrokh Goudarzi, Wan Haslina Hassan

Abstract:

Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers, or vertical handoffs, are necessary for seamless mobility. In this paper, we review existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures according to their methodology and parameters. Subsequently, we analyze several vertical handover approaches in the literature and compare their advantages and weaknesses. The paper compares the algorithms based on their network selection methods, the complexity of the technologies used, and their efficiency, in order to introduce our vertical handover decision framework. We find that vertical handovers in heterogeneous wireless networks suffer from the lack of a standard, efficient method for satisfying both user and network quality-of-service requirements at different levels, including architecture, decision-making and protocols. Also, the consolidation of the network terminal, cross-layer information, multi-packet casting and an intelligent network selection algorithm appears to be an optimal solution for achieving seamless service continuity and connectivity.
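
Many of the surveyed decision mechanisms reduce to scoring each candidate network with a weighted combination of metrics and handing over to the highest-scoring one. A minimal sketch of such a rule follows; the attribute set, values and weights are illustrative assumptions, not taken from any specific surveyed framework:

```python
# Minimal sketch of a multi-attribute vertical handover decision:
# each candidate network gets a weighted score and the best one wins.
CANDIDATES = {
    # name: (signal strength 0-1, bandwidth 0-1, cost 0-1 lower is better)
    "WLAN": (0.6, 0.9, 0.2),
    "LTE":  (0.8, 0.7, 0.6),
    "UMTS": (0.9, 0.4, 0.5),
}
WEIGHTS = (0.4, 0.4, 0.2)  # must sum to 1

def score(rss, bw, cost, w=WEIGHTS):
    # Cost is a "smaller is better" attribute, so invert it.
    return w[0] * rss + w[1] * bw + w[2] * (1.0 - cost)

best = max(CANDIDATES, key=lambda n: score(*CANDIDATES[n]))
print("Handover target:", best)
```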

Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms

Procedia PDF Downloads 393
8797 The Fibonacci Network: A Simple Alternative for Positional Encoding

Authors: Yair Bleiberg, Michael Werman

Abstract:

Coordinate-based Multi-Layer Perceptrons (MLPs) are known to have difficulty reconstructing high frequencies of the training data. A common solution to this problem is Positional Encoding (PE), which has become quite popular. However, PE has drawbacks: it produces high-frequency artifacts and adds another hyperparameter, just like batch normalization and dropout do. We believe that under certain circumstances PE is not necessary, and that a smarter construction of the network architecture together with a smart training method is sufficient to achieve similar results. In this paper, we show that very simple MLPs can quite easily output a frequency when given the half-frequency and the quarter-frequency as inputs. Using this, we design a network architecture in blocks, where the input to each block is the output of the two previous blocks along with the original input. We call this a Fibonacci Network. By training each block on the corresponding frequencies of the signal, we show that Fibonacci Networks can reconstruct arbitrarily high frequencies.
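
The block wiring described above can be sketched concisely. The block count, widths and scalar per-block outputs below are assumptions for illustration; the per-block frequency-wise training procedure is omitted:

```python
import torch
import torch.nn as nn

class FibBlock(nn.Module):
    """One block: a small MLP; later blocks see the outputs of the
    two previous blocks plus the original coordinate input."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

class FibonacciNet(nn.Module):
    def __init__(self, coord_dim=1, n_blocks=6, hidden=64):
        super().__init__()
        # First two blocks see only the coordinates; each later block
        # also receives the two previous blocks' scalar outputs.
        dims = [coord_dim, coord_dim] + [coord_dim + 2] * (n_blocks - 2)
        self.blocks = nn.ModuleList(FibBlock(d, hidden) for d in dims)

    def forward(self, coords):
        outs = []
        for i, block in enumerate(self.blocks):
            if i < 2:
                x = coords
            else:
                x = torch.cat([coords, outs[-2], outs[-1]], dim=-1)
            outs.append(block(x))
        return outs[-1]

y = FibonacciNet()(torch.rand(8, 1))  # 8 sample coordinates -> (8, 1)
```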

Keywords: neural networks, positional encoding, high frequency interpolation, fully connected

Procedia PDF Downloads 98
8796 Developing a Model – an Application of Fuzzy Analytic Network Process Techniques for Hostels

Authors: Pin-Ju Juan, Peng-Yu Juan, Yi-Shan Chen

Abstract:

The main purpose of this paper is to present a fuzzy Analytic Network Process (ANP) model for evaluating hostel organizational performance. We derived 39 criteria for hostel organizational performance from a literature review and from practical investigations using expert methods, and fuzzy analytic network process methods are used to consolidate decision-makers’ assessments of the criteria weightings. Finally, we applied the model to the organizational performance of a hostel in Taiwan to determine the effectiveness of the proposed evaluation model.
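
At the core of ANP-style weighting is deriving a priority vector from a pairwise comparison matrix of criteria. A minimal sketch of that single step follows, using the classical eigenvector method with made-up judgments; the fuzzy extension and the supermatrix construction of full ANP are omitted:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (Saaty 1-9 scale; A[i][j] = how strongly criterion i dominates j).
A = np.array([
    [1.0,  3.0,  5.0],
    [1/3., 1.0,  2.0],
    [1/5., 1/2., 1.0],
])

# The principal eigenvector of A gives the priority (weight) vector.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()
print("criteria weights:", np.round(w, 3))  # approx. [0.648 0.230 0.122]
```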

Keywords: Fuzzy ANP, hostel, organizational performance, strategy management

Procedia PDF Downloads 199
8795 Detection of Keypoint in Press-Fit Curve Based on Convolutional Neural Network

Authors: Shoujia Fang, Guoqing Ding, Xin Chen

Abstract:

The quality of press-fit assembly is closely related to the reliability and safety of the product. This paper proposes a keypoint detection method based on a convolutional neural network to improve the accuracy of keypoint detection in press-fit curves, providing an auxiliary basis for judging press-fit assembly quality. The press-fit curve is a curve of press-fit force versus displacement. Both the force data and the distance data are time series; therefore, a one-dimensional convolutional neural network is used to process the press-fit curve. After the acquired press-fit data is filtered, a multi-layer one-dimensional convolutional neural network performs automatic learning of press-fit curve features, which are then fed to a multi-layer perceptron that outputs the keypoints of the curve. We used data from press-fit assembly equipment in the actual production process to train the CNN model, and different data from the same equipment to evaluate detection performance. Compared with existing research results, detection performance was significantly improved. This method can provide a reliable basis for judging press-fit quality.
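
A minimal sketch of the architecture described above: a 1D convolutional feature extractor feeding an MLP keypoint regressor. The channel counts, kernel sizes, curve length and number of keypoints are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PressFitKeypointNet(nn.Module):
    """1D CNN feature extractor + MLP head that regresses the
    (normalized) positions of K keypoints along the curve."""
    def __init__(self, n_keypoints=3, curve_len=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (curve_len // 4), 128), nn.ReLU(),
            nn.Linear(128, n_keypoints),
        )
    def forward(self, x):   # x: (batch, 2, curve_len) = force, displacement
        return self.head(self.features(x))

out = PressFitKeypointNet()(torch.rand(4, 2, 512))  # -> (4, 3)
```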

Keywords: keypoint detection, curve feature, convolutional neural network, press-fit assembly

Procedia PDF Downloads 228
8794 Preparation of Nb Silicide-Based Alloy Powder by Hydrogenation-Dehydrogenation (HDH) Reaction

Authors: Gi-Beom Park, Hyong-Gi Park, Seong-Yong Lee, Jaeho Choi, Seok Hong Min, Tae Kwon Ha

Abstract:

The Nb silicide-based alloy has excellent high-temperature strength and a relatively lower density than Ni-based superalloys; therefore, it has been receiving a lot of attention as a next-generation high-temperature material. To enhance the high-temperature creep properties and oxidation resistance, Si was added to the Nb-based alloy, resulting in a multi-phase microstructure with a metal solid solution and a silicide phase. Since the silicide phase has low machinability due to its brittle nature, it is necessary to fabricate components using powder metallurgy. However, powder manufacturing techniques for these alloys have not yet been developed. In this study, we fabricated Nb-based alloy powder by the hydrogenation-dehydrogenation reaction. The Nb-based alloy ingot was prepared by vacuum arc melting and annealed in a hydrogen atmosphere for hydrogenation. After annealing, the hydrogen concentration increased from 0.004 wt% to 1.22 wt% and the Nb metal phase transformed into an Nb hydride phase. The hydrogenated alloy could easily be pulverized into powder by ball milling due to its brittleness. For dehydrogenation, the alloy powders were annealed in vacuum. After vacuum annealing, the hydrogen concentration decreased to 0.003 wt% and the Nb hydride phase transformed back into the Nb metal phase.

Keywords: Nb alloy, Nb metal and silicide composite, powder, hydrogenation-dehydrogenation reaction

Procedia PDF Downloads 244
8793 Green, Smooth and Easy Electrochemical Synthesis of N-Protected Indole Derivatives

Authors: Sarah Fahad Alajmi, Tamer Ezzat Youssef

Abstract:

Here, we report a simple method for the direct conversion of 6-nitro-1H-indole into N-substituted indoles via an electrochemical dehydrogenative reaction with halogenated reagents under strongly basic conditions, through N–R bond formation. The N-protected indoles were prepared under moderate and scalable electrolytic conditions. The reactions were conducted in a simple divided cell under constant current, without oxidizing reagents or transition-metal catalysts. The synthesized products were characterized via UV/Vis spectrophotometry, 1H-NMR, and FTIR spectroscopy. A possible reaction mechanism is discussed based on the N-protected products. This methodology could be applied to the synthesis of various biologically active N-substituted indole derivatives.

Keywords: green chemistry, 1H-indole, heteroaromatic, organic electrosynthesis

Procedia PDF Downloads 161
8792 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning Newsvendor Model, for the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and limited to a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on the unit processing costs into the model, described using the well-known Wright’s learning curve. Most of the assumptions of the classical Newsvendor Model are maintained in our work, such as constant per-unit costs of leftover and shortage, zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity of the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal: when the cost savings from worker learning are added to the expected total cost, the convexity of the cost function is likely not maintained. This calls for a new way of determining the optimal order policy. In response to this challenge, we found a number of characteristics of the expected cost function and its derivatives, which we then used to formulate the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if demand follows a uniform distribution; if demand follows a beta distribution with certain properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of periodic review systems for similar problems.
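
Because the expected cost may lose convexity once learning is included, a grid search over the lot size is a safe way to illustrate the trade-off. The sketch below combines Wright’s learning curve with the classical lost-sales newsvendor cost under uniform demand; all parameter values are made-up illustrations, not the paper’s data:

```python
import numpy as np

# --- assumed illustrative parameters (not from the paper) ---
c1, lr = 10.0, 0.9          # cost of first unit; 90% learning rate
b = np.log2(lr)             # Wright's learning-curve exponent (negative)
h, p = 2.0, 8.0             # per-unit leftover and shortage costs
d_max = 500.0               # demand ~ Uniform(0, d_max), lost sales

def processing_cost(q):
    # Wright's curve: the n-th unit processed costs c1 * n**b.
    n = np.arange(1, int(q) + 1)
    return (c1 * n**b).sum()

def expected_cost(q):
    leftover = q**2 / (2 * d_max)            # E[(q - D)+] for uniform D
    shortage = (d_max - q)**2 / (2 * d_max)  # E[(D - q)+] for uniform D
    return processing_cost(q) + h * leftover + p * shortage

qs = np.arange(1, int(d_max) + 1)            # grid search, robust to
costs = np.array([expected_cost(q) for q in qs])  # non-convex costs
print("optimal Q:", qs[costs.argmin()], "cost:", round(costs.min(), 1))
```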

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 416
8791 Packet Fragmentation Caused by Encryption and Using It as a Security Method

Authors: Said Rabah Azzam, Andrew Graham

Abstract:

This paper examines the fragmentation of packets caused by encryption applied at the network layer of the OSI model in Internet Protocol version 4 (IPv4) networks, as well as the possibility of using fragmentation and Access Control Lists (ACLs) as a method of restricting network access to certain hosts or areas of a network. Using default settings, fragmentation is expected to occur, and each fragment is expected to be reassembled at the other end. If this does not occur, then a high number of ICMP messages should be generated back towards the source host, indicating that the packet is too large and needs to be made smaller. The same result is expected when the MTU is changed for certain links between devices. When using ACLs and packet fragments to restrict access to hosts or network segments, it is possible that ACLs cannot be set up in this way. If ACLs cannot be set up to allow only fragments, then it is a limitation of the hardware’s firmware holding back this particular method. If the ACL on the restricted switch can be set up to allow only fragments, then a connection that forces packets to fragment should be allowed to pass through the ACL. This should then establish a network connection to the destination machine, allowing data to be sent to and from it. ICMP messages from the restricted-access switch and host should also be blocked from being sent back across the link, which will be shown in an SSH session into the switch.
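
The fragmentation behaviour described above can be reproduced with a short packet-crafting sketch using scapy (an assumed tooling choice; the paper itself works with switch ACLs and SSH sessions rather than scapy). The destination address and fragment size are placeholders:

```python
# A minimal scapy sketch of forcing IPv4 fragmentation, to probe how an
# ACL or a reduced-MTU link treats fragments. Run with suitable privileges.
from scapy.all import IP, ICMP, Raw, fragment, send

pkt = IP(dst="192.0.2.10") / ICMP() / Raw(b"A" * 3000)  # > typical MTU
frags = fragment(pkt, fragsize=512)   # split into 512-byte fragments

for f in frags:
    print(f.summary())                # inspect offsets and the MF flag
send(frags, verbose=False)            # transmit; observe ACL behaviour
```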

Keywords: fragmentation, encryption, security, switch

Procedia PDF Downloads 334
8790 Analysis on the Copyright Protection Dilemma of Webcast in 'Internet Plus' Era

Authors: Yi Yang

Abstract:

In the era of 'Internet plus', the rapid development of webcasting has posed new challenges to intellectual property law. Meanwhile, traditional copyright protection has exposed theoretical imbalances when applied to webcasts. Through an analysis of the outstanding problems in webcast copyright protection, this paper points out the main causes: the copyright status of webcasts is unclear, a copyright protection system for game webcasts has not yet been constructed, and copyright infringement is widespread in pan-entertainment live broadcasts, among others. Based on current practice, this paper puts forward specific thinking on the path to protecting webcast copyright. First, to provide a reasonable judicial solution for the large number of webcast copyright cases, the scope of rights and regulated behaviour of the broadcasting right and the information network communication right need to be integrated. Secondly, in order to protect the rights of network anchors, webcasts should be regarded as works. Thirdly, in order to protect webcast copyright and prevent infringement, the webcast platform should be used as an intermediary, providing a way out of the judicial dilemma. In the era of 'Internet plus', exploring the path and methods of webcast copyright protection is a theoretical attempt with positive guiding significance for judicial practice.

Keywords: 'Internet Plus' era, webcast, copyright, protection dilemma

Procedia PDF Downloads 113
8789 Optimal Utilization of Space in a Warehouse: A Case Study

Authors: Arun Kumar R. K. Gothra, Hasan Alhakamy

Abstract:

With increasing expectations and demands for warehousing and distribution, Warehouse Solution Incorporated in Victoria has been looking at ways to improve its business processes to maintain its competitive edge. To maintain high-quality service standards at competitive and affordable prices, improvements in logistics management are necessary. One such avenue is to make efficient use of the space available in the warehouse. This paper is based on a study of the collaboration between Warehouse Solution Inc. and the Dandenong Distribution Centre (DDC) to solve a congestion problem and enhance the efficiency of all warehouse activities.

Keywords: space optimization, optimal utilization, warehouse, DDC

Procedia PDF Downloads 610
8788 Reactive Transport Modeling in Carbonate Rocks: A Single Pore Model

Authors: Priyanka Agrawal, Janou Koskamp, Amir Raoof, Mariette Wolthers

Abstract:

Calcite is the main mineral found in carbonate rocks, which form significant hydrocarbon reservoirs and subsurface repositories for CO2 sequestration. Injected CO2 mixes with the reservoir fluid and disturbs the geochemical equilibrium, triggering calcite dissolution. Different combinations of fluid chemistry and injection rate may therefore result in different evolutions of porosity, permeability and dissolution patterns. To model the changes in porosity and permeability, a Kozeny-Carman relation K ∝ φ^n is used, where K is permeability and φ is porosity. The value of n is mostly based on experimental data or pore network models. In pore network models, this derivation depends on the accuracy of the relations used for conductivity and pore volume change; in fact, at the single-pore scale, this relationship is the result of the pore shape developing during dissolution. We have prepared a new reactive transport model for a single pore, which simulates the complex chemical reactions of carbonic-acid-induced calcite dissolution and the subsequent pore-geometry evolution at the single-pore scale. We use the COMSOL Multiphysics package 5.3 for the simulation; COMSOL utilizes the arbitrary Lagrangian-Eulerian (ALE) method for the free-moving domain boundary. We examined the effect of flow rate on the evolution of single-pore shape profiles due to calcite dissolution, using three flow rates to cover diffusion-dominated and advection-dominated transport regimes. In diffusion-dominated flow (Pe numbers 0.037 and 0.37), the fluid becomes less reactive along the pore length and thus produces non-uniform pore shapes. However, for advection-dominated flow (Pe number 3.75), the fast fluid velocity keeps the fluid relatively more reactive towards the end of the pore length, yielding a uniform pore shape. Different pore shapes, in terms of inlet opening versus overall pore opening, will have an impact on the relation between changing volume and conductivity. We have related the pore shape to the Pe number, which controls the transport regime, and for every Pe number we derived the relation between conductivity and porosity. These relations will be used in the pore network model to obtain the porosity and permeability variation.
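
The porosity-permeability update mentioned above is simple enough to state in a few lines. In this sketch the exponent n and the reference values are generic assumptions; the paper's point is precisely that n should follow from the Pe-dependent pore shape rather than being fixed:

```python
# A minimal sketch of a Kozeny-Carman-type update K ∝ φ^n used to
# propagate porosity change into permeability. Exponent and initial
# values are illustrative, not results from the study.
def permeability(phi, k0=1e-15, phi0=0.20, n=3.0):
    """Permeability (m^2) after porosity evolves from phi0 to phi."""
    return k0 * (phi / phi0) ** n

for phi in (0.20, 0.25, 0.30):   # porosity increased by dissolution
    print(f"phi={phi:.2f} -> K={permeability(phi):.2e} m^2")
```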

Keywords: single pore, reactive transport, calcite system, moving boundary

Procedia PDF Downloads 374
8787 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been devoted to devising the most informative features, and this area has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic, time-evolving network. To account for this, we construct the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them using an auto-encoder-like method, in order to obtain feature vectors that remain informative while being of manageable length. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts] respectively, and use traditional supervised classification models such as SVM and logistic regression. The observed results show the effectiveness of the proposed approach compared to ad-hoc feature-selection-based approaches and static node2vec.
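
A minimal sketch of the concatenate-and-compress step described above; the embedding dimension, number of snapshots, architecture and training budget are all illustrative assumptions, and random tensors stand in for actual node2vec outputs:

```python
import torch
import torch.nn as nn

# Hypothetical setup: node2vec embeddings of dimension 64 computed at
# 5 graph snapshots, for 1000 nodes (random stand-ins here).
T, N, D = 5, 1000, 64
snapshots = [torch.rand(N, D) for _ in range(T)]
x = torch.cat(snapshots, dim=1)          # (N, T*D) concatenated history

class AE(nn.Module):
    def __init__(self, d_in, d_code=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                 nn.Linear(128, d_code))
        self.dec = nn.Sequential(nn.Linear(d_code, 128), nn.ReLU(),
                                 nn.Linear(128, d_in))
    def forward(self, x):
        return self.dec(self.enc(x))

ae = AE(T * D)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                     # reconstruction training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(x), x)
    loss.backward()
    opt.step()

features = ae.enc(x).detach()            # (N, 32) compressed node features
```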

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 314
8786 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit operation, functional and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous phenomena-based process alternatives and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them; e.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the meaningless ones; for example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function, and combining these options for the different functions leads to the generation of a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the highest product yield. The current methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
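
The combine-then-screen step described above maps naturally onto exhaustive enumeration with feasibility filters. A minimal sketch follows; the phenomena list and the single screening rule (taken from the phase-change example in the text) are simplified illustrations:

```python
from itertools import combinations

# Hypothetical phenomena list and one screening rule from the text:
# phase change requires the co-presence of energy transfer.
PHENOMENA = ["mixing", "reaction", "vapour-liquid equilibrium",
             "phase change", "energy transfer", "liquid-liquid equilibrium"]

def feasible(combo):
    return "phase change" not in combo or "energy transfer" in combo

options = [c for r in range(2, 4)
           for c in combinations(PHENOMENA, r) if feasible(c)]
print(len(options), "feasible phenomena combinations, e.g.:", options[0])
```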

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 232
8785 The Influence of Reaction Parameters on Magnetic Properties of Synthesized Strontium Ferrite

Authors: M. Bahgat, F. M. Awan, H. A. Hanafy

Abstract:

The conventional ceramic route was utilized to prepare a hard magnetic powder (M-type strontium ferrite, SrFe12O19). A stoichiometric mixture of iron oxide and strontium carbonate was calcined at 1000 °C and then fired at various temperatures. The influence of various reaction parameters, such as mixing ratio, calcination temperature, firing temperature and firing time, on the magnetic behavior of the synthesized powder was investigated. The magnetic properties, including coercivity (Hc), saturation magnetization (Ms) and remanent magnetization (Mr), were measured by a vibrating sample magnetometer. Morphologically, the produced magnetic powder has a dense hexagonal grain structure.

Keywords: hard magnetic materials, ceramic route, strontium ferrite, magnetic properties

Procedia PDF Downloads 693
8784 Sono- and Photocatalytic Degradation of Indigocarmine in Water Using ZnO

Authors: V. Veena, Suguna Yesodharan, E. P. Yesodharan

Abstract:

Two advanced oxidation processes (AOPs), sono- and photocatalysis mediated by the semiconductor oxide catalyst ZnO, have been found effective for the removal of trace amounts of the toxic dye pollutant Indigocarmine (IC) from water. The effects of various reaction parameters, such as dye concentration, catalyst dosage, temperature, pH and dissolved oxygen, as well as the addition of oxidisers and the presence of salts in water, on the rate of degradation have been evaluated and optimised. The degradation follows variable kinetics depending on the concentration of the substrate, with the order of reaction varying from 1 to 0 as the concentration increases. The reaction proceeds through a number of intermediates, many of which have been identified using the GC-MS technique; the intermediates do not affect the rate of degradation significantly. The influence of anions such as chloride, sulphate, fluoride, carbonate, bicarbonate and phosphate on the degradation of IC is not consistent and does not follow any predictable pattern: phosphates and fluorides inhibit the degradation, while chloride, sulphate, carbonate and bicarbonate enhance it. Adsorption studies of the dye in the absence as well as the presence of these anions show that there may not be any direct correlation between adsorption of the dye on the catalyst and the degradation. Oxidants such as hydrogen peroxide and persulphate enhance the degradation, though their combined effect is less than the cumulative effect of the individual components. COD measurements show that the degradation proceeds to complete mineralisation. The results will be presented and a probable mechanism for the degradation will be discussed.

Keywords: AOP, COD, indigocarmine, photocatalysis, sonocatalysis

Procedia PDF Downloads 336
8783 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL

Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson

Abstract:

The invention and development of Polymerase Chain Reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics. There is an urgent need to optimise the performance of these devices while reducing total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous-flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach that accounts for the geometrical features of a CF μPCR device by performing a series of simulations at a relatively small number of Design of Experiments (DoE) points, using COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the respective response surfaces, a genetic algorithm and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate the possibility of improving the DNA efficiency by ∼2% in one PCR cycle when doubling the width of the microchannel to 400 μm while maintaining the height at the value of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (a 32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken for thermal cycling in such devices.
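
The surrogate-enabled loop described above (sample DoE points, fit a response surface, optimise on the surrogate) can be sketched in a few lines. Here an analytic stand-in replaces the COMSOL CFD evaluations, a radial-basis-function surface replaces the polyharmonic-spline/neural-network surrogates, and differential evolution stands in for the genetic algorithm; bounds and values are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

# Hypothetical DoE: channel width/height (um) vs. a simulated objective.
rng = np.random.default_rng(0)
doe = rng.uniform([100, 30], [400, 80], size=(20, 2))  # 20 design points

def cfd_stand_in(x):            # placeholder for a CFD evaluation
    w, h = x.T
    return (w - 350)**2 / 1e4 + (h - 50)**2 / 1e2      # lower is better

surrogate = RBFInterpolator(doe, cfd_stand_in(doe),
                            kernel="thin_plate_spline")

res = differential_evolution(lambda x: float(surrogate(x[None])),
                             bounds=[(100, 400), (30, 80)], seed=0)
print("optimal (width, height) um:", np.round(res.x, 1))
```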

Keywords: PCR, optimisation, microfluidics, COMSOL

Procedia PDF Downloads 161
8782 Fast Adjustable Threshold for Uniform Neural Network Quantization

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev

Abstract:

Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require a simplified and accelerated quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile neural network architectures like MobileNet-v1, MobileNet-v2 and MNAS. Here we present a method to significantly optimize the training-with-quantization procedure by introducing trained scale factors for the discretization thresholds that are separate for each filter. Using the proposed technique, we quantize the modern mobile architectures of neural networks with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of training dataset size and the small number of trainable parameters allow fine-tuning the network for several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-to-use models and code are available in the GitHub repository.
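
A simplified sketch of per-filter fake quantization with a trainable scale, in the spirit of the method described above but not the authors' implementation; the straight-through estimator lets gradients reach both the weights and the per-filter scales:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ste_round(x):
    # Straight-through estimator: round in the forward pass,
    # identity gradient in the backward pass.
    return x + (torch.round(x) - x).detach()

class LearnedScaleFakeQuant(nn.Module):
    """Uniform symmetric 8-bit fake quantization of conv weights with
    one trainable scale per output filter (a simplified sketch)."""
    def __init__(self, n_filters, bits=8):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(n_filters, 1, 1, 1))
        self.qmax = 2 ** (bits - 1) - 1

    def forward(self, w):
        q = torch.clamp(ste_round(w / self.scale), -self.qmax, self.qmax)
        return q * self.scale   # dequantized weights, scale is trainable

conv = nn.Conv2d(3, 16, 3, padding=1)
fq = LearnedScaleFakeQuant(n_filters=16)
with torch.no_grad():           # initialize scales from the weight range
    fq.scale.copy_(conv.weight.abs().amax(dim=(1, 2, 3), keepdim=True)
                   / fq.qmax)

x = torch.rand(1, 3, 32, 32)
y = F.conv2d(x, fq(conv.weight), conv.bias, padding=1)  # quantized forward
```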

Keywords: distillation, machine learning, neural networks, quantization

Procedia PDF Downloads 325
8781 H2 Permeation Properties of a Catalytic Membrane Reactor in Methane Steam Reforming Reaction

Authors: M. Amanipour, J. Towfighi, E. Ganji Babakhani, M. Heidari

Abstract:

A cylindrical alumina microfiltration membrane (GMITM Corporation; inside diameter 9 mm, outside diameter 13 mm, length 50 mm) with an average pore size of 0.5 micrometre and a porosity of about 0.35 was used as the support for the membrane reactor. This support was soaked in boehmite sols, the mean particle size of which was adjusted in the range of 50 to 500 nm by carefully controlling the hydrolysis time, and calcined at 650 °C for two hours. This process was repeated with different boehmite solutions in order to achieve an intermediate layer with an average pore size of about 50 nm. The resulting substrate was then coated with a thin, dense layer of silica by the counter-current chemical vapour deposition (CVD) method. A boehmite sol with 10 wt.% nickel, prepared by a standard procedure, was used to make the catalytic layer, which was characterized by BET, SEM, and XRD analyses. The catalytic membrane reactor was placed in an experimental setup to evaluate its permeation and hydrogen separation performance for the steam reforming reaction. The setup consisted of a tubular module in which the membrane was fixed, with the reforming reaction occurring on the inner side of the membrane. A methane stream diluted with nitrogen, together with deionized water at a steam-to-carbon (S/C) ratio of 3.0, entered the reactor after it had been heated to 500 °C at a rate of 2 °C/min and the catalytic layer had been reduced in the presence of hydrogen for 2.5 hours. Nitrogen was used as the sweep gas on the outer side of the reactor. Any liquid produced was trapped and separated at the reactor exit by a cold trap, and the produced gases were analyzed by an on-line gas chromatograph (Agilent 7890A) to measure total CH4 conversion and H2 permeation. BET analysis indicated a uniform size distribution for the catalyst, with an average pore size of 280 nm and an average surface area of 275 m2.g-1. Single-component permeation tests were carried out for hydrogen, methane, and carbon dioxide in the temperature range of 500-800 °C, and the results showed almost the same permeance and hydrogen selectivity values as for the composite membrane without a catalytic layer. The performance of the catalytic membrane was evaluated by applying the membrane as a membrane reactor for the methane steam reforming reaction at a gas hourly space velocity (GHSV) of 10,000 h−1 and 2 bar. CH4 conversion increased from 50% to 85% as the reaction temperature increased from 600 °C to 750 °C, which is sufficiently above the equilibrium curve at the reaction conditions, but slightly lower than for a membrane reactor with a packed nickel catalytic bed, because of the bed's higher surface area compared to the catalytic layer.

Keywords: catalytic membrane, hydrogen, methane steam reforming, permeance

Procedia PDF Downloads 256
8780 Sustainable Design of Coastal Bridge Networks in the Presence of Multiple Flood and Earthquake Risks

Authors: Riyadh Alsultani, Ali Majdi

Abstract:

In order to evaluate the flood and earthquake risks of coastal bridge networks, it is necessary to develop a design methodology that includes the probability of seismic events occurring in a region, the vulnerability of the civil hydraulic structures, and the effects of the hazard on society, the environment, and the economy. This paper presents a design approach for assessing the risk and sustainability of coastal bridge networks under time-variant flood-earthquake conditions. The social, environmental, and economic indicators of the network are used to measure its sustainability; these consist of anticipated loss, downtime, energy waste, and carbon dioxide emissions. The design process takes into account the probability of occurrence of a set of flood and earthquake scenarios representing the local hazard activity. Network links are assessed based on the performance of each bridge as determined by fragility analysis. The network's connectivity and the bridges' damage states after an earthquake scenario determine the network's sustainability and risk. The temporal volatility of the sustainability measures and the risk of structural degradation are both highlighted. The method is demonstrated using a transportation network in Baghdad, Iraq.

Keywords: sustainability, coastal bridge networks, flood-earthquake risk, structural design

Procedia PDF Downloads 93
8779 A Comparative and Critical Analysis of Some Routing Protocols in Wireless Sensor Networks

Authors: Ishtiaq Wahid, Masood Ahmad, Nighat Ayub, Sajad Ali

Abstract:

The lifetime of a wireless sensor network (WSN) is directly proportional to the energy consumption of its constituent nodes. Routing in wireless sensor networks is very challenging due to their inherent characteristics. In hierarchical routing, the sensor field is divided into clusters. Cluster-heads are selected from each cluster, forming a hierarchy of nodes: the cluster-heads transmit the data to the base station while the other nodes perform the sensing task. In this way the lifetime of the network is increased. In this paper a comparative study of hierarchical routing protocols is conducted. The simulation is done in NS-2 for validation.
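
For concreteness, the probabilistic cluster-head election used by LEACH, a representative hierarchical protocol of the kind surveyed here, can be sketched as follows (the population size and head fraction are illustrative):

```python
import random

# LEACH-style election: in round r, a node that has not recently served
# as a head becomes one with probability T(n) = p / (1 - p*(r mod 1/p)),
# where p is the desired cluster-head fraction.
def is_cluster_head(node_was_head_recently, r, p=0.05):
    if node_was_head_recently:       # heads rest for 1/p rounds
        return False
    threshold = p / (1 - p * (r % int(1 / p)))
    return random.random() < threshold

heads = [n for n in range(100)
         if is_cluster_head(node_was_head_recently=False, r=3)]
print(len(heads), "cluster heads elected this round")
```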

Keywords: WSN, cluster, routing, sensor networks

Procedia PDF Downloads 478
8778 A Reinforcement Learning Approach for Evaluation of Real-Time Disaster Relief Demand and Network Condition

Authors: Ali Nadi, Ali Edrissi

Abstract:

Relief demand and the availability of transportation links are the essential information needed for every natural disaster operation, and this information is not at hand once a disaster strikes. In related work, relief demand and network condition have been evaluated using prediction methods. Nevertheless, predictions tend to over- or underestimate due to uncertainties and may lead to failed operations. Therefore, in this paper a stochastic programming model is proposed to evaluate real-time relief demand and network condition at the onset of a natural disaster. To address the time sensitivity of the emergency response, the proposed model uses reinforcement learning to minimize the total relief assessment time. The proposed model is tested on a real-size network problem. The simulation results indicate that the proposed model performs well at collecting real-time information.
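
A toy sketch of the reinforcement learning ingredient: tabular Q-learning that routes a single assessment team across a small road network so as to minimize total traversal time. The network, costs and hyperparameters are made-up illustrations, not the paper's stochastic programming model:

```python
import numpy as np

# States are nodes; an action is the next node along an outgoing link;
# the reward is the negative travel time, so maximizing return
# minimizes total assessment time.
rng = np.random.default_rng(1)
travel = {(0, 1): 4, (0, 2): 2, (1, 3): 5, (2, 3): 8, (2, 1): 1}
GOAL, EPISODES, alpha, gamma, eps = 3, 500, 0.1, 0.95, 0.2
Q = {s: {a: 0.0 for (s2, a) in travel if s2 == s} for s in range(3)}

for _ in range(EPISODES):
    s = 0
    while s != GOAL:
        acts = list(Q[s])
        a = rng.choice(acts) if rng.random() < eps else max(acts, key=Q[s].get)
        r = -travel[(s, a)]
        future = 0.0 if a == GOAL else max(Q[a].values())
        Q[s][a] += alpha * (r + gamma * future - Q[s][a])
        s = a

path, s = [0], 0
while s != GOAL:
    s = max(Q[s], key=Q[s].get)
    path.append(s)
print("learned assessment route:", path)   # expect [0, 2, 1, 3]
```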

Keywords: disaster management, real-time demand, reinforcement learning, relief demand

Procedia PDF Downloads 316
8777 An Entropy Based Novel Algorithm for Internal Attack Detection in Wireless Sensor Network

Authors: Muhammad R. Ahmed, Mohammed Aseeri

Abstract:

A Wireless Sensor Network (WSN) consists of low-cost, multifunctional, resource-constrained nodes that communicate over short distances through wireless links. It is an open medium underpinned by application-driven technology for information gathering and processing, and can be used for many different applications, ranging from military implementation on the battlefield to environmental monitoring, the health sector, and emergency surveillance response. With this nature and these application scenarios, the security of WSNs has drawn great attention. They are known to be vulnerable to a variety of attacks because of the construction of the nodes and the distributed network infrastructure. In order to ensure their functionality, especially in malicious environments, security mechanisms are essential. Malicious or internal attackers have gained prominence and pose the most challenging attacks to WSNs. Much work has been done to secure WSNs from internal attacks, but most of it relies on either training data sets or predefined thresholds; without a fixed security infrastructure, detecting internal attacks in a WSN remains a challenge. In this paper we present an internal attack detection method based on a maximum entropy model. The final experimental work showed that the proposed algorithm works well at the designed level.
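
The entropy idea can be illustrated independently of the paper's full maximum-entropy model: compute the Shannon entropy of a windowed traffic feature and flag windows that deviate from a baseline. The traffic samples and threshold below are illustrative assumptions:

```python
import math
from collections import Counter

# Shannon entropy of a feature distribution (here, packet destination
# IDs in a window); an internal attack that funnels traffic to one node
# sharply lowers the entropy.
def shannon_entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

normal_window = [1, 2, 3, 4, 1, 2, 3, 4, 5, 6]   # diverse destinations
attack_window = [7, 7, 7, 7, 7, 7, 7, 1, 7, 7]   # traffic funneled to 7

baseline = shannon_entropy(normal_window)
for name, w in [("normal", normal_window), ("suspect", attack_window)]:
    h = shannon_entropy(w)
    flagged = abs(h - baseline) > 1.0            # assumed threshold
    print(f"{name}: H={h:.2f} bits, internal-attack flag={flagged}")
```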

Keywords: internal attack, wireless sensor network, network security, entropy

Procedia PDF Downloads 455
8776 Preparation, Structure, and Properties of Hydroxyl Containing Acrylate Monomer Grafted Silk Fabrics by HRP-Catalyzed ATRP Method

Authors: Tieling Xing, Jinqiu Yang, Guoqiang Chen

Abstract:

It is environmentally friendly to use horseradish peroxidase (HRP) instead of the traditional transition-metal catalyst for atom transfer radical polymerization (ATRP). Silk fabrics were successfully grafted with a hydroxyl-containing acrylate monomer by an HRP-catalyzed ATRP method to improve their crease resistance. Taking grafting yield as the evaluation index, single-factor tests revealed the optimum grafting conditions: monomer mass fraction 120-210% (o.w.f.), HRP concentration 360-480 U/mL, molar ratio of HRP to NaAsc 1:150, reaction temperature 50-60 °C, and reaction time 24 h. Raman spectra showed that the hydroxyl-containing acrylate monomer was successfully grafted onto the silk fabrics. SEM images indicated that the surface of the grafted silk became rougher and that the graft copolymer was distributed evenly on the surface of the silk fiber. The crease-recovery property of the grafted silk fabric was greatly improved, especially the wet crease recovery angle. The results show that hydroxyl-containing acrylate monomers can be successfully grafted onto silk fabric by the HRP-catalyzed ATRP method.

Keywords: atom transfer radical polymerization, catalysis, horseradish peroxidase, hydroxyl-containing acrylate monomer

Procedia PDF Downloads 151
8775 Facile Synthesis and Structure Characterization of Europium (III) Tungstate Nanoparticles

Authors: Mehdi Rahimi-Nasrabadi, Seied Mahdi Pourmortazavi

Abstract:

Taguchi robust design, as a statistical method, was applied to optimize the process parameters for the tunable, simple and fast synthesis of europium (III) tungstate nanoparticles. The nanoparticles were synthesized by a chemical precipitation reaction involving direct addition of an aqueous europium ion solution to the tungstate reagent dissolved in aqueous media. The effects of several synthesis variables, i.e. europium and tungstate concentrations, flow rate of cation reagent addition, and reactor temperature, on the particle size of the europium (III) tungstate nanoparticles were studied experimentally in order to tune the particle size. Analysis of variance shows the importance of controlling the tungstate concentration, cation feeding flow rate and temperature for the preparation of europium (III) tungstate nanoparticles by the proposed chemical precipitation reaction. Finally, europium (III) tungstate nanoparticles were synthesized under the optimum conditions of the proposed method, and the morphology and chemical composition of the prepared nanomaterial were characterized by means of X-ray diffraction, scanning electron microscopy, transmission electron microscopy, FT-IR spectroscopy, and fluorescence.
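
Taguchi analyses of a minimization target such as particle size typically rank factor combinations by the "smaller-is-better" signal-to-noise ratio. A minimal sketch with made-up replicate measurements (not the study's data):

```python
import numpy as np

# Taguchi "smaller-is-better" S/N ratio: SN = -10*log10(mean(y^2));
# the factor combination with the highest SN is preferred.
def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

trials = {                      # hypothetical particle sizes (nm)
    "A1B1C1": [62, 58, 65],
    "A2B1C2": [41, 44, 39],
    "A2B2C1": [55, 60, 57],
}
for t, y in trials.items():
    print(t, round(sn_smaller_is_better(y), 2), "dB")
best = max(trials, key=lambda t: sn_smaller_is_better(trials[t]))
print("best factor combination:", best)
```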

Keywords: europium (III) tungstate, nano-material, particle size control, procedure optimization

Procedia PDF Downloads 395
8774 Analyzing the Impact of Global Financial Crisis on Interconnectedness of Asian Stock Markets Using Network Science

Authors: Jitendra Aswani

Abstract:

In the first section of this study, the impact of the Global Financial Crisis (GFC) on the synchronization of fourteen Asian stock markets (ASMs), those of Hong Kong, India, Thailand, Singapore, Taiwan, Pakistan, Bangladesh, South Korea, Malaysia, Indonesia, Japan, China, the Philippines and Sri Lanka, is analysed using network science and its metrics, such as node degree, clustering coefficient and network density. In the second section, by introducing the US stock market into the existing network and developing a Minimum Spanning Tree (MST), the spread of the crisis from the US stock market to the Asian stock markets is explained. The data used for this study are the adjusted closing prices of these indices from 6th January 2000 to 15th September 2013, divided into three sub-periods: pre-, during and post-crisis. Using network analysis, it is found that the Asian stock markets became more interdependent during the crisis than before or after it, and that Hong Kong, India, South Korea and Japan are systemically important stock markets in the Asian region. Therefore, failure of or shock to any of these systemically important stock markets can cause contagion to other stock markets of the region. This study is useful for global investors in portfolio management, especially during crisis periods, and for policy makers in formulating financial regulation norms, by revealing the connections between the stock markets and how this system of markets changes during and after a crisis.
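
The MST construction referred to above (and the Kruskal keyword) follows the standard correlation-distance recipe. A minimal sketch with random stand-in returns; in the study the inputs would be the adjusted closing prices of the fifteen indices:

```python
import numpy as np
import networkx as nx

# Mantegna-style construction: distance d_ij = sqrt(2 * (1 - rho_ij))
# on log-returns, then a minimum spanning tree over the distance graph.
rng = np.random.default_rng(0)
names = ["US", "HK", "IN", "KR", "JP"]          # subset for illustration
returns = rng.normal(size=(500, len(names)))    # placeholder log-returns

rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2 * (1 - rho))

G = nx.Graph()
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        G.add_edge(names[i], names[j], weight=dist[i, j])

mst = nx.minimum_spanning_tree(G, algorithm="kruskal")
print(sorted(mst.edges(data="weight")))
```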

Keywords: global financial crisis, Asian stock markets, network science, Kruskal algorithm

Procedia PDF Downloads 424
8773 Production of the Radionuclide Therapy Agent Terbium-161 Using the TALYS1.6 and EMPIRE 3.2 Codes for Cyclotron Reactions

Authors: Shohreh Rahimi Lascokalayeh, Hasan Yousefnia, Mojtaba Tajik, Samaneh Zolghadri, Bentehoda Abdolhosseini

Abstract:

In this study, the production of terbium-161 as a new therapeutic radionuclide was investigated using the TALYS1.6 and EMPIRE 3.2 codes. For this purpose, cross sections for the candidate charged-particle reactions producing 161Tb were extracted by means of these codes. In the following step, the stopping power for the reactions was calculated with the SRIM code. The best reaction for the production of 161Tb is 160Gd(d,n)161Tb. The production yield of 161Tb was then obtained using a MATLAB calculation code based on the charged-particle reaction formalism. The results showed a production yield of 0.8 mCi/μA·h for 161Tb.
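
The yield computation rests on the standard thick-target integral Y ∝ ∫ σ(E)/(dE/dx) dE over the beam's energy loss in the target. A numerical sketch follows; the cross-section and stopping-power curves are made-up stand-ins for TALYS/EMPIRE and SRIM outputs, not evaluated 160Gd(d,n)161Tb data:

```python
import numpy as np

# Relative thick-target yield: integrate sigma(E) / (dE/dx) between the
# exit and incident beam energies. All curves below are illustrative.
E = np.linspace(5.0, 15.0, 200)                  # deuteron energy (MeV)
sigma = 0.3 * np.exp(-((E - 10.0) / 2.0) ** 2)   # cross section (barn)
dEdx = 0.05 * E**-0.8 + 0.02                     # stopping power (MeV*cm2/mg)

relative_yield = np.trapz(sigma / dEdx, E)
print(f"relative thick-target yield: {relative_yield:.1f} (arb. units)")
```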

Keywords: terbium161, TALYS1.6, EMPIRE3.2, yield, cross-section

Procedia PDF Downloads 451
8772 A t-SNE and UMAP Based Neural Network Image Classification Algorithm

Authors: Shelby Simpson, William Stanley, Namir Naba, Xiaodi Wang

Abstract:

Both t-SNE and UMAP are state-of-the-art tools that predominantly preserve local structure, that is, they group neighboring data points together, which provides a very informative visualization of heterogeneity in the data. In this research, we develop a t-SNE- and UMAP-based neural network image classification algorithm that embeds the original dataset into a corresponding low-dimensional dataset as a preprocessing step and then uses this embedded database as input to a specially designed neural network classifier for image classification. In our experiments we use the Fashion-MNIST data set, a labeled data set of images of clothing objects. t-SNE and UMAP are used for dimensionality reduction of the data set and thus produce low-dimensional embeddings, which we feed into two neural networks. The accuracy of the models from the two neural networks is then compared to a dense neural network that does not use embeddings as input, to show which model classifies the images of clothing objects more accurately.
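
A minimal sketch of the embed-then-classify pipeline using the umap-learn and scikit-learn packages; random arrays stand in for Fashion-MNIST, and the embedding dimension and classifier size are illustrative assumptions:

```python
import numpy as np
import umap
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 784))            # stand-in for Fashion-MNIST pixels
y = rng.integers(0, 10, size=2000)     # stand-in class labels

emb = umap.UMAP(n_components=10).fit_transform(X)   # low-dim embedding
X_tr, X_te, y_tr, y_te = train_test_split(emb, y, test_size=0.2,
                                          random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("embedded-input accuracy:", clf.score(X_te, y_te))
```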

Keywords: t-SNE, UMAP, fashion MNIST, neural networks

Procedia PDF Downloads 198
8771 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses cost-benefit analysis aspects of the millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60 and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was performed for regular cellular topologies, considering unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be distinguished at cell lengths R ≈ 10 m for the millimetre wavebands, while for longer distances an optimum of the revenue is observed at R ≈ 550 m for 5.62 GHz. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R, and starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, achieving a maximum at R approximately equal to 550 m.
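
The two path-loss shapes contrasted above can be sketched directly. The two-slope exponents and break-point distance below are generic textbook assumptions rather than the paper's calibrated UMiLoS parameters:

```python
import numpy as np

# Free-space (Friis) path loss versus a two-slope model that steepens
# past a break-point distance d_break.
C = 3e8

def friis_db(d, f_hz):
    return 20 * np.log10(4 * np.pi * d * f_hz / C)

def two_slope_db(d, f_hz, d_break=160.0, n1=2.0, n2=4.0):
    pl_1m = friis_db(1.0, f_hz)
    if d <= d_break:
        return pl_1m + 10 * n1 * np.log10(d)
    return (pl_1m + 10 * n1 * np.log10(d_break)
            + 10 * n2 * np.log10(d / d_break))

for d in (10, 100, 550):
    print(f"d={d:4d} m | 5.62 GHz two-slope: {two_slope_db(d, 5.62e9):6.1f} dB"
          f" | 28 GHz Friis: {friis_db(d, 28e9):6.1f} dB")
```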

Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G

Procedia PDF Downloads 141
8770 Reaching the Goals of Routine HIV Screening Programs: Quantifying and Implementing an Effective HIV Screening System in Northern Nigeria Facilities Based on Optimal Volume Analysis

Authors: Folajinmi Oluwasina, Towolawi Adetayo, Kate Ssamula, Penninah Iutung, Daniel Reijer

Abstract:

Objective: Routine HIV screening has been promoted as an essential component of efforts to reduce incidence, morbidity, and mortality. The objectives of this study were to identify the optimal annual volume needed to realize the public health goals of HIV screening in AIDS Healthcare Foundation-supported hospitals and to establish an implementation process to realize that optimal annual volume. Methods: Starting in 2011, a program was established to routinize HIV screening within communities and government hospitals. In 2016, five years of HIV screening data were reviewed to identify the optimal annual proportions of age-eligible patients screened to realize the public health goals of reducing new diagnoses and ending late-stage diagnosis (tracked as concurrent HIV/AIDS diagnosis). Analysis demonstrated that rates of new diagnoses level off when 42% of age-eligible patients are screened, providing a baseline for routine screening efforts, and that concurrent HIV/AIDS diagnoses reach statistical zero at screening rates of 70%. Annual facility-based targets were restructured to meet these new target volumes. Restructuring efforts focused on right-sizing HIV screening programs to align and transition programs to integrated HIV screening within standard medical care and treatment. Results: Over one million patients were screened for HIV during the five years; there were 16,033 new HIV diagnoses, access to care and treatment was successfully arranged for 82% (13,206), and concurrent diagnosis rates went from 32.26% to 25.27%. While screening rates increased by 104.7% over the five years, volume analysis demonstrated that rates need to increase by a further 62.52% to reach the desired 20% baseline and more than double to reach the optimal annual screening volume. Facility targets for HIV screening were then increased to reflect the volume analysis, and in that third year, 12 of the 19 facilities reached or exceeded the new baseline targets. Conclusions and Recommendation: Quantifying targets against routine HIV screening goals identified the optimal annual screening volume and allowed facilities to scale their program size and allocate resources accordingly. The program transitioned from utilizing non-evidence-based annual volume increases to establishing annual targets based on optimal volume analysis. This has allowed efforts to be evaluated on their ability to realize quantified goals related to the public health value of HIV screening. Optimal volume analysis helps to determine the size of an HIV screening program; it is a public health tool, not a tool to determine whether an individual patient should receive screening.

Keywords: HIV screening, optimal volume, HIV diagnosis, routine

Procedia PDF Downloads 263