Search results for: looping pipe networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3080

1580 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enables individual cells to perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Constructing computational models to realize genetic circuits is an especially challenging task, since it requires discovering the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which is heavily dependent on the spatial arrangement of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency because they process past states sequentially and compress contextual information into a bottleneck for long input sequences.
In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, achieved a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large numbers of training examples, which may be difficult to compile in many real-world gene circuit designs.
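DNABERT-style encoders do not read raw bases; they first tokenize a DNA sequence into overlapping k-mers, which the attention layers then relate across the whole sequence. A minimal sketch of that tokenization step (illustrative only, not the authors' code):

```python
def kmer_tokenize(sequence: str, k: int = 6) -> list:
    """Split a DNA string into overlapping k-mer tokens, the input
    representation used by DNABERT-style transformer models."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# An 8-base sequence yields 3 overlapping 6-mers.
print(kmer_tokenize("ATGCGTAC"))  # → ['ATGCGT', 'TGCGTA', 'GCGTAC']
```

Each k-mer is then mapped to a vocabulary index and fed to the bidirectional encoder, so that contextual dependencies between distant genes are captured by attention rather than by sequential state compression.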

Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers

Procedia PDF Downloads 48
1579 Optimal Number and Placement of Vertical Links in 3D Network-On-Chip

Authors: Nesrine Toubaline, Djamel Bennouar, Ali Mahdoum

Abstract:

3D technology can lead to a significant reduction in power and average hop count in Networks-on-Chip (NoCs). It offers short and fast vertical links, which cope with the long-wire problem in 2D NoCs. This work proposes a heuristic-based method to optimize the number and placement of vertical links to achieve specified performance goals. Experiments show that significant improvement can be achieved by using a specific number of vertical interconnects.

Keywords: interconnect optimization, monolithic inter-tier vias, network on chip, system on chip, through silicon vias, three dimensional integration circuits

Procedia PDF Downloads 283
1578 Structural Design of a Relief Valve Considering Strength

Authors: Nam-Hee Kim, Jang-Hoon Ko, Kwon-Hee Lee

Abstract:

A relief valve is a mechanical element that maintains safety by controlling high pressure. Usually, the high pressure is relieved by using the spring force and letting the fluid flow out of the system through another path. When normal pressure is restored, the relief valve returns to its initial state. The relief valve in this study has been applied to pressure vessels, evaporators, piping lines, etc. The relief valve should be designed for smooth operation and should satisfy the structural safety requirement under operating conditions. In general, the structural analysis is performed following a fluid flow analysis. In this process, FSI (Fluid-Structure Interaction) is required so that the force obtained from the output of the flow analysis can be input to the structural analysis. Firstly, this study predicts the velocity profile and the pressure distribution in the given system. The assumptions for the flow analysis are as follows: • The flow is steady-state and three-dimensional. • The fluid is Newtonian and incompressible. • The walls of the pipe and valve are smooth. The flow characteristics in this relief valve do not induce any problem. The commercial software ANSYS/CFX is utilized for the flow analysis. On the contrary, very high pressure may cause structural problems due to severe stress. The relief valve consists of a body, bonnet, guide, piston, and nozzle, and its material is stainless steel. To investigate its structural safety, the worst-case loading is considered to be a pressure of 700 bar. The load is applied to the inside of the valve and is greater than the load obtained from the FSI. The maximum stress is calculated as 378 MPa by performing the finite element analysis. However, this value is greater than the allowable value. Thus, an alternative design is suggested to improve the structural performance through a case study. We found that the design variable most sensitive to the strength is the shape of the nozzle. The case study varies the size of the nozzle.
Finally, it can be seen that the suggested design satisfies the structural design requirement. The FE analysis is performed using the commercial software ANSYS/Workbench.
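The pass/fail criterion described above reduces to comparing the FE-computed maximum stress with the material's allowable stress. A small sketch of that check (the 205 MPa allowable stress below is an assumed illustrative value, not taken from the paper):

```python
def safety_factor(allowable_stress_mpa: float, max_stress_mpa: float) -> float:
    """Safety factor = allowable stress / computed maximum stress.
    A value below 1.0 means the design fails the strength requirement."""
    return allowable_stress_mpa / max_stress_mpa

# The abstract reports a computed maximum stress of 378 MPa exceeding the
# allowable value; 205 MPa is a hypothetical allowable stress for illustration.
sf = safety_factor(allowable_stress_mpa=205.0, max_stress_mpa=378.0)
print(sf < 1.0)  # → True: the initial design does not satisfy the requirement
```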

Keywords: relief valve, structural analysis, structural design, strength, safety factor

Procedia PDF Downloads 289
1577 Assessment of the Quality of Telecommunication Services by a Fuzzy Inference System

Authors: Oktay Nusratov, Ramin Rzaev, Aydin Goyushov

Abstract:

A fuzzy-inference-based approach to forming a modular intelligent system for assessing the quality of communication services is proposed. The basic fuzzy estimation model developed under this approach takes into account the recommendations of the International Telecommunication Union with respect to the operation of packet-switched networks based on the IP protocol. A multilayer feedforward neural network is used to implement the main features and functions of the fuzzy system for controlling the quality of telecommunication services.
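A fuzzy estimation model of this kind is built from membership functions that map a crisp measurement (e.g., packet delay) to a degree of membership in a linguistic term. A generic triangular membership function, shown here only to illustrate the mechanism (the delay set and its bounds are assumptions, not values from the paper):

```python
def triangular_membership(x: float, a: float, b: float, c: float) -> float:
    """Membership degree in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical "acceptable delay" fuzzy set peaking at 150 ms over [0, 300] ms.
degree = triangular_membership(100.0, 0.0, 150.0, 300.0)
```

A fuzzy inference system combines such degrees through rules (fuzzy implications) and defuzzifies the result into a quality score; the neural network mentioned above can learn the rule weights.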

Keywords: quality of communication, IP-telephony, fuzzy set, fuzzy implication, neural network

Procedia PDF Downloads 454
1576 Similitude for Thermal Scale-up of a Multiphase Thermolysis Reactor in the Cu-Cl Cycle of Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

The thermochemical copper-chlorine (Cu-Cl) cycle is considered a sustainable and efficient technology for hydrogen production when linked with clean-energy systems such as nuclear reactors or solar thermal plants. In the Cu-Cl cycle, water is decomposed thermally into hydrogen and oxygen through a series of intermediate reactions. This paper investigates the thermal scale-up analysis of the three-phase oxygen production reactor in the Cu-Cl cycle, where the reaction is endothermic and the temperature is about 530 °C. The paper focuses on examining the size and number of oxygen reactors required to provide enough heat input for different rates of hydrogen production. The type of multiphase reactor used in this paper is the continuous stirred tank reactor (CSTR), heated by a half-pipe jacket. The thermal resistance of each section in the jacketed reactor system is studied to examine its effect on the heat balance of the reactor. It is found that the dominant contribution to the system thermal resistance comes from the reactor wall. In the analysis, the Cu-Cl cycle is assumed to be driven by a nuclear reactor, and two types of nuclear reactors are examined as the heat source for the oxygen reactor: the CANDU Super Critical Water Reactor (CANDU-SCWR) and the High Temperature Gas Reactor (HTGR). It is concluded that the heat transfer rate that must be provided for the CANDU-SCWR is 3-4 times higher than for the HTGR. The effect of the reactor aspect ratio is also examined; it is found that increasing the aspect ratio decreases the number of reactors, although the rate of decrease diminishes as the aspect ratio increases. Finally, a comparison between the results of the heat balance and existing results of the mass balance shows that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
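The wall-dominated thermal resistance discussed above follows from the standard series-resistance model of a jacketed vessel; for a cylindrical wall, the conduction term is R = ln(r_out/r_in)/(2πkL). A sketch with assumed illustrative dimensions (not values from the paper):

```python
import math

def cylindrical_wall_resistance(r_in: float, r_out: float,
                                k: float, length: float) -> float:
    """Conduction resistance (K/W) of a cylindrical wall:
    R = ln(r_out / r_in) / (2 * pi * k * L)."""
    return math.log(r_out / r_in) / (2.0 * math.pi * k * length)

# Assumed stainless-steel CSTR wall: k ≈ 16 W/(m·K), 0.50 m inner radius,
# 0.52 m outer radius, 2.0 m height (illustrative only).
r_wall = cylindrical_wall_resistance(0.50, 0.52, 16.0, 2.0)
```

In a series-resistance heat balance, this wall term is summed with the jacket-side and reactant-side convection resistances; the largest term limits the achievable heat input per reactor.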

Keywords: sustainable energy, clean energy, Cu-Cl cycle, heat transfer, hydrogen, oxygen

Procedia PDF Downloads 283
1575 Probabilistic Modeling of a Laser Transmitter

Authors: H. S. Kang

Abstract:

A coupled electrical and optical model for the conversion of electrical energy into coherent optical energy for a transmitter-receiver link by a solid-state device is presented. Probability distributions for the travelling laser beam's switching time intervals and for the number of switchings in a time interval are obtained. Selector function mapping is employed to regulate the optical data transmission speed. It is established that regulated laser transmission from a PhotoActive Laser transmitter follows the principle of invariance. This considerably simplifies the design of PhotoActive Laser Transmission networks.

Keywords: computational mathematics, finite difference Markov chain methods, sequence spaces, singularly perturbed differential equations

Procedia PDF Downloads 416
1574 Competences for Learning beyond the Academic Context

Authors: Cristina Galván-Fernández

Abstract:

Students differentiate among the different contexts of their lives, such as employment, hobbies, or studies. In higher education, it is necessary to transfer experiential knowledge to theory and vice versa. However, it is difficult to get students to use their personal experiences and social readings to produce learning evidence. In an experience with 178 education students from Chile and Spain, we used an e-portfolio system and a methodology over 4 years with the aims of helping them to: 1) self-regulate their learning process and 2) use social networks and professional experiences to create learning evidence. These two objectives were monitored through interviews with the same students at different moments and through two questionnaires. The results of this study show that students recognize ownership of their learning and progress in planning and reflecting on their own learning.

Keywords: competences, e-portfolio, higher education, self-regulation

Procedia PDF Downloads 282
1573 Reactive Analysis of Different Protocols in Mobile Ad Hoc Networks

Authors: Manoj Kumar

Abstract:

Routing protocols have a central role in any mobile ad hoc network (MANET). There are many routing protocols that exhibit different performance levels in different scenarios. In this paper, we compare the AODV, DSDV, DSR, and ZRP routing protocols in mobile ad hoc networks to determine the best operational conditions for each protocol. We analyze these routing protocols through extensive simulations in the OPNET simulator and show how pause time and the number of nodes affect their performance. In this study, performance is measured in terms of control traffic received, control traffic sent, data traffic received, data traffic sent, throughput, and retransmission attempts.

Keywords: AODV, DSDV, DSR, ZRP

Procedia PDF Downloads 498
1572 Democracy in Gaming: An Artificial Neural Network Based Approach towards Rule Evolution

Authors: Nelvin Joseph, K. Krishna Milan Rao, Praveen Dwarakanath

Abstract:

The explosive growth of smartphones around the world has led to a shift of the primary engagement tool for entertainment from traditional consoles and music players to a single all-integrated device. Augmented Reality is the next big shift, bringing a new dimension to play. The paper explores the construction and working of the community engine in Delta T, an Augmented Reality game that allows users to evolve the rules of the game through collective bargaining, mirroring democracy even in a gaming world.

Keywords: augmented reality, artificial neural networks, mobile application, human computer interaction, community engine

Procedia PDF Downloads 308
1571 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a method for financial market prediction that leverages the synergistic capability of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by significant volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that correctly reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic signals such as interest rates, inflation rates, GDP growth, and unemployment rates into our model. Our GCN algorithm is adept at learning the relational patterns among specific financial instruments represented as nodes in a comprehensive market graph.
Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complicated network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive evaluation of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to improve investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
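As a concrete illustration of the propagation step a GCN performs on such a market graph, here is a dependency-free toy version of the common rule H' = ReLU(Â·H·W), where Â is the row-normalized adjacency matrix with self-loops (our simplification for illustration, not the authors' implementation):

```python
def gcn_layer(adj, feats, weights):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W), where A_hat is
    the row-normalized adjacency matrix with self-loops added."""
    n = len(adj)
    # Add self-loops, then row-normalize so each row sums to 1.
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    for i in range(n):
        s = sum(a_hat[i])
        a_hat[i] = [v / s for v in a_hat[i]]
    # Aggregate neighbour features: A_hat @ H.
    d = len(feats[0])
    agg = [[sum(a_hat[i][k] * feats[k][j] for k in range(n)) for j in range(d)]
           for i in range(n)]
    # Linear transform plus ReLU: ReLU(agg @ W).
    out_d = len(weights[0])
    return [[max(0.0, sum(agg[i][k] * weights[k][j] for k in range(d)))
             for j in range(out_d)] for i in range(n)]

# Two connected assets with one-hot features and a summing weight column.
h1 = gcn_layer([[0, 1], [1, 0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [1.0]])
```

In the paper's setting, node features would be price/volume attributes, Â would encode co-movement and sentiment edges, and the LSTM would consume the sequence of such layer outputs over time.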

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 36
1570 The Use of Social Media in the Recruitment Process as HR Strategy

Authors: Seema Sant

Abstract:

In the 21st century, where a four-generation workforce is at work, it is crucial for organizations to build a talent management strategy, as the tech-savvy Gen Y has entered the workforce. They are more connected to each other than ever through internet-enabled social media networks, and social media has become important in today's world; the number of users of such social media sites has multiplied. From sharing their opinion of a brand or product, to researching a company before going for an interview, to forming a conception of a company's culture, to following a company's updates out of sheer interest or for job vacancies, today's workforce is constantly in touch with social networks. The corporate world has rightly realized their potential for business purposes. Companies now use social media for marketing, advertising, consumer surveys, etc. For HR professionals, social media is used for networking and connecting to the talent pool through talent communities. Social recruiting is the process of sourcing or hiring candidates through the use of social sites such as LinkedIn, Facebook, and Twitter, which provide an array of information about potential employees. This study represents an exploratory investigation of the role of social networking sites in recruitment. The primary aim is to analyze the factors that can enhance the recruitment channel used by recruiters, with specific reference to IT organizations in Mumbai, India. In particular, the aim is to identify how and why companies use social media to attract and screen applicants during their recruitment processes. The study also examines the advantages and limitations of recruitment through social media for employers; this is done through a literature review. Further, the paper examines the recruiter's impact and the various opportunities that technology has created. To analyze and examine these factors, both primary and secondary data were collected for the study.
The primary data were gathered from five HR managers working in five top IT organizations in Mumbai and from 100 HR consultants, i.e., recruiters. The data were collected by conducting a survey and administering a closed-ended questionnaire. A comprehensive analysis of the study is depicted through graphs and figures. From the analysis, it was observed that there exists a positive relationship between the level of employees recruited through social media and their organizational commitment. Finally, the findings show that companies, i.e., recruiters, are currently using social media in recruitment, but perhaps not as effectively as they could. The paper gives recommendations and conditions for success that can help employers make the most of social media in recruitment.

Keywords: recruitment, social media, social sites, workforce

Procedia PDF Downloads 167
1569 Sensor Network Routing Optimization by Simulating Eurygaster Life in Wheat Farms

Authors: Fariborz Ahmadi, Hamid Salehi, Khosrow Karimi

Abstract:

A sensor network is a set of sensor nodes that cooperate to perform predefined tasks. An important problem in such networks is power consumption. In this paper, an algorithm based on the eurygaster life cycle is introduced to minimize power consumption by the nodes of these networks. In this method, the search space of the problem is divided into several partitions, and each partition is investigated separately. The evaluation results show that our approach is more efficient than other evolutionary algorithms such as the genetic algorithm.

Keywords: evolutionary computation, genetic algorithm, particle swarm optimization, sensor network optimization

Procedia PDF Downloads 404
1568 International Relations and the Transformation of Political Regimes in Post-Soviet States

Authors: Sergey Chirun

Abstract:

Using a combination of institutional analysis and a network approach has allowed the author to identify the characteristics of the informal institutions of regional political power and political regimes. According to the author, the 'field' of activity of post-Soviet regimes, formed under the influence of informal institutions, often contradicts democratic institutional regional changes, which are aimed at creating a legal-rational type of political domination and a balanced model of separation of powers. This leads to a gap between the formal structure of institutions and the real nature of power, predetermining the specific character of the existing political regimes.

Keywords: authoritarianism, institutions, political regime, social networks, transformation

Procedia PDF Downloads 476
1567 Exploring Causes of Irregular Migration: Evidence from Rural Punjab, India

Authors: Kulwinder Singh

Abstract:

Punjab is one of the major labour-exporting states of India. Every year more than 20,000 youths from Punjab attempt irregular migration; about 84 per cent of irregular migrants are from rural areas and 16 per cent from urban areas. Irregular migration can only be accomplished through highly efficient international networks spanning the countries of origin, transit, and destination. A good number of Punjabis continue to immigrate into the UK for work through unauthorized means, entering the country on visit visas and overstaying, or getting 'smuggled into' the country with the help of transnational networks of agents. Although efforts are being made by the government to curb irregular migration through the Punjab Prevention of Human Smuggling Rules (2012, 2014) and the Punjab Travel Regulation Act (2012), it still exists parallel to regular migration. Despite the unprecedented miseries of irregular migrants and the strict laws implemented by the state government to check this phenomenon, 'why do Punjabis migrate abroad irregularly?' is the important question to answer. This study addresses the question by comparing irregular migration with regular migration; in other words, the analysis reveals the major causes, specifically economic ones, of irregular migration from rural Punjab. The study is unique in presenting the economics of irregular migration, given that previous studies emphasize the role of sociological and psychological factors. Addressing the question 'why do Punjabis migrate abroad irregularly?', the present study reveals that Punjabis, being far-sighted, attempt irregular migration because, although it is economically nonviable in the short run, it offers lucrative economic gains as the migrant gets older. Despite its considerably higher cost vis-a-vis regular migration, it is a better employment option for irregular migrants, offering higher permanent income than local low-paid jobs, for which risking one's life has become the mindset of rural Punjabis.
Although it carries considerably lower economic benefits compared with regular migration, it provides the opportunity of migrating abroad to less educated, semi-skilled, and language-test-ineligible Punjabis who cannot migrate through regular channels. As its positive impacts on source and destination countries are evident, it need not be restricted; rather, its effective management, through the liberalisation of restrictive migration policies by destination nations, can protect the interests of all involved stakeholders.

Keywords: cost, migration, income, irregular, regular, remittances

Procedia PDF Downloads 109
1566 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired by a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. The estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance or the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language.
The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensorimotor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the network that best discriminates between controls and early MS is the sensorimotor network I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
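The RF feature ranking mentioned above rests on the Gini index: a feature is important when splits on it sharply reduce class impurity. A minimal, dependency-free sketch of that criterion (illustrative only; the study itself used R):

```python
def gini_impurity(labels):
    """Gini index of a label multiset: 1 - sum over classes of p_c**2."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def gini_gain(parent, left, right):
    """Impurity decrease from splitting `parent` into `left` and `right`;
    Random Forests accumulate these gains per feature to rank importance."""
    n = len(parent)
    return (gini_impurity(parent)
            - (len(left) / n) * gini_impurity(left)
            - (len(right) / n) * gini_impurity(right))

# A perfect split of 2 MS patients and 2 controls removes all impurity.
gain = gini_gain(["MS", "MS", "HC", "HC"], ["MS", "MS"], ["HC", "HC"])
```

A feature such as the sensorimotor network I signal would score highly precisely because thresholding it yields splits with large impurity decreases across the forest's trees.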

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 227
1565 Modelling and Simulation of Natural Gas-Fired Power Plant Integrated to a CO2 Capture Plant

Authors: Ebuwa Osagie, Chet Biliyok, Yeung Hoi

Abstract:

The regeneration energy requirement, and ways to reduce it, is the main focus of most current CO2 capture research; the post-combustion carbon capture (PCC) option is identified as the most suitable for natural gas-fired power plants. From current research and development (R&D) activities worldwide, two main areas are being examined to reduce the regeneration energy requirement of amine-based PCC: (a) development of new solvents with better overall performance than a 30 wt% monoethanolamine (MEA) aqueous solution, which is considered the baseline solvent for solvent-based PCC, and (b) integration of the PCC plant with the power plant. In scaling up a PCC pilot plant to the size required for a commercial-scale natural gas-fired power plant, process modelling and simulation are essential. In this work, an integrated process made up of a 482 MWe natural gas-fired power plant and a previously developed and validated MEA-based PCC plant has been modelled and simulated. The PCC plant has four absorber columns and a single stripper column; the modelling and simulation were performed with Aspen Plus® V8.4. The gas turbine, heat recovery steam generator, and steam cycle were modelled based on a 2010 US DOE report, while the MEA-based PCC plant was modelled as a rate-based process. The scaling of the amine plant was performed using a rate-based calculation in preference to the equilibrium-based approach for 90% CO2 capture. The power plant was integrated with the PCC plant in three ways: (i) the flue gas stream from the power plant is divided equally into four streams, each fed into one of the four absorbers in the PCC plant; (ii) steam drawn off from the IP/LP cross-over pipe in the steam cycle of the power plant is used to regenerate solvent in the reboiler; (iii) condensate returns from the reboiler to the power plant.
The integration of the PCC plant with the NGCC plant resulted in a reduction of the power plant output by 73.56 MWe, and the net efficiency of the integrated system is reduced by 7.3 percentage points. A secondary aim of this study is the parametric studies performed to assess the impacts of the natural gas on the overall performance of the integrated process, which is achieved through investigation of the capture efficiencies.
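The headline numbers above combine into a simple energy-penalty calculation. In the sketch below, only the 482 MWe gross output, the 73.56 MWe loss, and the 7.3-point efficiency drop come from the study; the 59% base efficiency is an assumed illustrative value:

```python
def capture_penalty(gross_mwe: float, loss_mwe: float,
                    base_eff_pct: float, eff_drop_pts: float):
    """Net power output and net efficiency after integrating a PCC plant."""
    return gross_mwe - loss_mwe, base_eff_pct - eff_drop_pts

# 482 MWe gross, 73.56 MWe lost to capture; 59% base efficiency is assumed.
net_mwe, net_eff = capture_penalty(482.0, 73.56, 59.0, 7.3)
# net output ≈ 408.44 MWe; net efficiency ≈ 51.7 %
```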

Keywords: natural gas-fired, power plant, MEA, CO2 capture, modelling, simulation

Procedia PDF Downloads 432
1564 Simulation of Turbulent Flow in Channel Using Generalized Hydrodynamic Equations

Authors: Alex Fedoseyev

Abstract:

This study explores the Generalized Hydrodynamic Equations (GHE) for the simulation of turbulent flows. The GHE were derived from the Generalized Boltzmann Equation (GBE) by Alexeev (1994). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions (Alexeev, 1994). The GHE have new terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These new terms have a timescale multiplier τ, and the GHE become the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length-scale ratio, τ = Re·(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The turbulence phenomenon is not well understood and is not described by the NSE. One or two additional equations are usually required for a turbulence model, which may have to be tuned for specific problems. We show that, in the case of the GHE, no additional turbulence model is needed, and the turbulent velocity profile is obtained from the GHE. The 2D turbulent channel and circular pipe flows were investigated using a numerical solution of the GHE for several cases. The solutions are compared with the experimental data for circular pipes and 2D channels by Nikuradse (1932, Prandtl Lab), Hussain and Reynolds (1975), Wei and Willmarth (1989), and Van Doorne (2007), the theory of Wosnik, Castillo, and George (2000), and the relevant experiments on the Superpipe setup at Princeton, with data by Zagarola (1996) and Zagarola and Smits (1998); the Reynolds number ranges from Re=7200 to Re=960000. The numerical solution data compared well with the experimental data, as well as with the approximate analytical solution for turbulent flow in a channel by Fedoseyev (2023). The obtained results confirm that the Alexeev generalized hydrodynamic theory (GHE) is in good agreement with the experiments for turbulent flows. The proposed approach is limited to 2D and 3D axisymmetric channel geometries.
Further work will extend this approach by including channels with square and rectangular cross-sections.
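The nondimensional multiplier defined in the abstract is simple enough to compute directly. The sketch below evaluates τ=Re*(l/L)² for illustrative values of Re and l/L (the numbers are examples, not results from the paper):

```python
def tau(reynolds: float, l_kolmogorov: float, l_hydro: float) -> float:
    """Nondimensional timescale multiplier tau = Re * (l/L)^2.

    As tau -> 0 the GHE reduce to the Navier-Stokes equations.
    """
    return reynolds * (l_kolmogorov / l_hydro) ** 2

# Illustrative values only: Re = 7200 (low end of the studied range), l/L = 0.01
tau_example = tau(7200.0, 0.01, 1.0)  # ≈ 0.72
```

Because τ scales with Re and with the square of the length-scale ratio, even a small l/L keeps τ finite at high Reynolds numbers, which is where the extra fluctuation terms matter.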

Keywords: comparison with experimental data, generalized hydrodynamic equations, numerical solution, turbulent boundary layer, turbulent flow in channel

Procedia PDF Downloads 50
1563 A Low Power Consumption Routing Protocol Based on a Meta-Heuristics

Authors: Kaddi Mohammed, Benahmed Khelifa D. Benatiallah

Abstract:

A sensor network consists of a large number of sensors deployed in an area to monitor it and to communicate with each other through a wireless medium. Routing the collected data in the network consumes most of the energy of the sensor nodes. For this reason, multiple routing approaches have been proposed to conserve the energy resources of the sensors and to overcome the challenges of their limitation. In this work, we propose a new low-energy-consumption routing protocol for wireless sensor networks based on meta-heuristic methods. Our protocol aims to use energy more fairly when routing captured data to the base station.
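The abstract does not specify which meta-heuristic is used, so the following is only a generic illustration of energy-aware next-hop selection: a node forwards to the neighbor that balances residual energy against distance to the base station. The field names (`pos`, `energy`, `id`) and the cost function are assumptions for the sketch, not the paper's protocol:

```python
import math

def next_hop(neighbors, base_station):
    """Pick the forwarding neighbor with the lowest cost, where cost grows
    with distance to the base station and shrinks with residual energy.
    Illustrative greedy heuristic only; the paper's meta-heuristic is not
    described in the abstract."""
    def dist(a, b):
        return math.dist(a["pos"], b["pos"])
    return min(
        neighbors,
        key=lambda n: dist(n, base_station) / max(n["energy"], 1e-9),
    )

bs = {"pos": (0.0, 0.0)}
nbrs = [
    {"id": "a", "pos": (1.0, 0.0), "energy": 0.9},
    {"id": "b", "pos": (1.0, 0.0), "energy": 0.2},
]
# Both neighbors are equally far from the sink, so the
# higher-energy neighbor "a" is chosen, spreading load fairly.
```

Dividing by residual energy is one simple way to express the "operate energy more fairly" goal: depleted nodes become expensive and are routed around.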

Keywords: WSN, routing, energy, heuristic

Procedia PDF Downloads 326
1562 Effect of Gravity on the Controlled Cooling of a Steel Block by Impinging Water Jets

Authors: E.K.K. Agyeman, P. Mousseau, A. Sarda, D. Edelin

Abstract:

The uniform and controlled cooling of hot metals by the circulation of water in canals remains a challenge due to the phase change of the water and the high heat fluxes associated with the phase change. This is because, during the cooling process, the phases are not uniformly distributed along the canals with the liquid phase dominating at the entrances of the canals and the gaseous phase dominating towards the exits. The difference in thermal properties between both phases leads to a heterogeneous temperature distribution in the part being cooled. Slowing down the cooling process is also a challenge due to the high heat fluxes associated with the phase change of water. This study investigates the use of multiple water jets for the controlled and homogenous cooling of hot metal parts and the effect of gravity on the effectiveness of the cooling process with a potential application in the cooling of composite forming moulds. A hole is bored at the centre of a steel block along its length. The jets are generated from the holes of a perforated steel pipe which is placed along the centre of the hole bored in the steel block. The evolution of the temperature with respect to time on the external surface of the steel block is measured simultaneously by thermocouples and an infrared camera. Different jet positions are tested in order to identify the jet placement configuration that ensures the most homogenous cooling of the block while the cooling speed is controlled by an intermittent impingement of the jets. In order to study the effect of gravity on the cooling process, a scenario where the jets are oriented in the opposite direction to that of gravity is compared to one where the jets are aligned in the same direction as gravity. It’s observed that orienting the jets in the direction of gravity reduces the effectiveness of the cooling process on the face of the block facing the impinging jets. 
This is due to the formation of a deeper pool of water caused by the effect of gravity and the curved surface of the canal. This deeper pool of water influences the boiling regime, which is characterized by slower bubble evacuation when compared to the scenario where the jets are opposed to gravity.

Keywords: cooling speed, gravity, homogenous cooling, jet impingement

Procedia PDF Downloads 113
1561 A Survey of Attacks and Security Requirements in Wireless Sensor Networks

Authors: Vishnu Pratap Singh Kirar

Abstract:

A wireless sensor network (WSN) is a network of many interconnected sensor nodes, equipped with limited energy resources and used to detect physical characteristics of their environment. Much research on WSNs has been performed in the past decades. WSNs are applicable in many security systems governed by the military and in many civilian applications. Thus, the security of WSNs attracts the attention of researchers and offers many opportunities for future work. Still, there are many other issues related to deployment and overall coverage, scalability, size, energy efficiency, quality of service (QoS), computational power and more. In this paper, we discuss various applications as well as the security-related issues and requirements of WSNs.

Keywords: wireless sensor network (WSN), wireless network attacks, wireless network security, security requirements

Procedia PDF Downloads 468
1560 A Survey on Positive Real and Strictly Positive Real Scalar Transfer Functions

Authors: Mojtaba Hakimi-Moghaddam

Abstract:

Positive real and strictly positive real transfer functions are important concepts in control theory. In this paper, the results of research in these areas are summarized. Definitions together with their graphical interpretations are mentioned. The equivalent conditions in the frequency domain and state space representations are reviewed. Their equivalent electrical networks are explained. Also, a comprehensive discussion about the difference in the behavior of the real part of positive real and strictly positive real transfer functions at high frequencies is presented. Furthermore, several illustrative examples are given.
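The frequency-domain condition reviewed above (Re G(jω) ≥ 0 for all ω, for a stable G) can be probed numerically. The sketch below checks the sign of Re G(jω) on a logarithmic grid for G(s) = (s+1)/(s+2); a grid check is of course only a necessary-condition screen, not a proof of positive realness:

```python
import numpy as np

def nonnegative_real_part_on_grid(num, den, omegas):
    """Check Re(G(jw)) >= 0 on a frequency grid for a rational G(s)
    given by polynomial coefficients (highest power first).
    A finite grid can refute, but not prove, positive realness."""
    s = 1j * np.asarray(omegas)
    g = np.polyval(num, s) / np.polyval(den, s)
    return bool(np.all(g.real >= -1e-12))

w = np.logspace(-3, 3, 2000)
# G(s) = (s+1)/(s+2): Re G(jw) = (2 + w^2)/(4 + w^2) > 0, so the check passes.
print(nonnegative_real_part_on_grid([1, 1], [1, 2], w))  # True
```

By contrast, the non-minimum-phase G(s) = (s−1)/(s+2) has Re G(jω) = (ω²−2)/(4+ω²), negative at low frequencies, so the same check rejects it.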

Keywords: real rational transfer functions, positive realness property, strictly positive realness property, equivalent conditions

Procedia PDF Downloads 368
1559 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems

Authors: Joachim F. Sartor

Abstract:

According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious areas, which possess a surface drainage system and empty into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because the impact may be significant. Since there is a lack of sufficient measurements of storm-runoff events for such catchments, and hence of verified simulation methods to analyze their design flows, German standards give only general advice and demand special considerations in such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics. With increasing area size their hydrological behavior approximates that of rural catchments, e.g. sub-surface flow may prevail and lag times are comparably long. There are few observed peak flow values, and the simple (mostly empirical) approaches offered by the literature for Central Europe are at least helpful to cross-check results that are achieved by simulation lacking calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany with catchment areas between 0.33 and 1.07 km², the author investigated three different approaches to determine the rainfall excess by multiple-event simulation. These are the modified SCS variable runoff coefficient methods by Lutz and Zaiß as well as the soil moisture model by Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from the recommendations of M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two runoff coefficient methods showed results that are of sufficient accuracy for most practical purposes. 
The soil moisture model showed no significantly better results, at least not to such a degree that it would justify the additional data collection that its parameter determination requires. In particular, typical convective summer events after long dry periods, which are often decisive for sewer networks (not so much for rivers), showed discrepancies between simulated and measured flow hydrographs.
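For readers unfamiliar with the family of methods named above: the Lutz and Zaiß variants build on the classical SCS curve-number relation for rainfall excess. The sketch below implements only that classical baseline (metric form, S = 25400/CN − 254 mm, Ia = 0.2 S), not the German modifications, which the abstract does not detail:

```python
def scs_runoff_mm(p_mm: float, cn: float) -> float:
    """Classical SCS curve-number rainfall excess Q in mm.

    S  = 25400/CN - 254   (potential maximum retention, mm)
    Ia = 0.2 * S          (initial abstraction)
    Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0.
    Baseline formula only; the Lutz/Zaiss variable-coefficient
    modifications are not reproduced here.
    """
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For CN = 100 (fully impervious) all rainfall becomes runoff, while for a rural CN of 75 a 10 mm event stays below the initial abstraction and produces no excess, which is why small storms on pervious external catchments often contribute nothing to the sewer network.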

Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage

Procedia PDF Downloads 135
1558 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma, and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality as observed in another study. Simulations of the spiking neurons are performed using the BindsNET framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable, representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. 
We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron, the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for both sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
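The forward-vs-reversed-sequence experiment described above can be reproduced with a toy model. The sketch below is not BindsNET code; the decay constant, the multiplicative form of the pairwise modulation, and the "previous synapse" dependency are illustrative assumptions standing in for the paper's mechanism:

```python
import numpy as np

def lif_response(spike_order, w, relation=None, decay=0.9):
    """Peak membrane potential of a LIF-like neuron fed one input spike per
    time step. `relation[j, i]` modulates synapse i when synapse j fired on
    the previous step: a toy version of the pairwise short-term plasticity
    described in the abstract (parameter values are illustrative)."""
    v, peak, prev = 0.0, 0.0, None
    for i in spike_order:
        eff = w[i]
        if relation is not None and prev is not None:
            eff *= 1.0 + relation[prev, i]   # neighbour-driven potentiation/depression
        v = decay * v + eff                  # leaky integration at the soma
        peak = max(peak, v)
        prev = i
    return peak

w = np.ones(5)                # five input synapses, equal weights
rel = np.zeros((5, 5))
rel[0, 1] = 0.5               # synapse 1 is potentiated right after synapse 0 fires
fwd = [0, 1, 2, 3, 4]
rev = fwd[::-1]

# A plain LIF responds identically to the sequence and its reverse...
assert lif_response(fwd, w) == lif_response(rev, w)
# ...while the dendritic variant discriminates the temporal order.
assert lif_response(fwd, w, rel) != lif_response(rev, w, rel)
```

With identical weights and linear somatic summation, only the pairwise term carries order information, which is exactly the point the two-output-neuron demonstration makes.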

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 115
1557 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed a possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. The models are evaluated using the external dataset for validation. The models’ accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. 
The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 112
1556 Exploring Barriers to Social Innovation: Swedish Experiences from Nine Research Circles

Authors: Claes Gunnarsson, Karin Fröding, Nina Hasche

Abstract:

Innovation is a necessity for the evolution of societies and it is also a driving force in human life that leverages value creation among cross-sector participants in various network arrangements. Social innovations can be characterized as the creation and implementation of a new solution to a social problem, which is more effective and sustainable than existing solutions in terms of improving society’s conditions and in particular social inclusion processes. However, barriers exist which may restrict the potential of social innovations to live up to their promise as a societal welfare-promoting driving force. The literature points at difficulties in tackling social problems primarily related to problem complexity, access to networks, and lack of financial muscle. Further research is warranted for detailed clarification of these barriers, also connected to recognition of the interplay between institutional logics and the development of cross-sector collaborations in networks, and the organizing processes needed to achieve innovation-barrier breakthroughs. There is also a need to further elaborate how obstacles that create a difference between the actual and desired state of innovative value-creating service systems can be overcome. The purpose of this paper is to illustrate barriers to social innovations, based on qualitative content analysis of 36 dialogue-based seminars (i.e. research circles) with nine Swedish focus groups including more than 90 individuals representing civil society organizations, private business, municipal offices, and politicians; and to analyze patterns that reveal constituents of barriers to social innovations. The paper draws on central aspects of innovation barriers as discussed in the literature and analyzes barriers basically related to internal/external and tangible/intangible characteristics. 
The findings of this study are that existing institutional structures highly influence the transformative potential of social innovations, as well as networking conditions in terms of building a competence-propelled strategy, which serves as an offspring for overcoming barriers of competence extension. Both theoretical and practical knowledge will contribute to how policy-makers and SI-practitioners can facilitate and support social innovation processes to be contextually adapted and implemented across areas and sectors.

Keywords: barriers, research circles, social innovation, service systems

Procedia PDF Downloads 240
1555 Monitoring and Prediction of Intra-Crosstalk in All-Optical Network

Authors: Ahmed Jedidi, Mesfer Mohammed Alshamrani, Alwi Mohammad A. Bamhdi

Abstract:

Optical performance monitoring and optical network management are essential in building a reliable, high-capacity, service-differentiation-enabled all-optical network (AON). One of the serious problems in such a network is the fact that optical crosstalk is additive, and thus the aggregate effect of crosstalk over a whole AON may be more harmful than a single point of crosstalk. As a result, we note a huge degradation of the Quality of Service (QoS) in the network. It is therefore necessary to identify and monitor the impairments across the whole network. To this end, this paper presents a new system to identify and monitor crosstalk in AONs in a real-time fashion. In particular, it proposes a new technique to manage intra-crosstalk with the objective of preserving the QoS of the network.

Keywords: all-optical networks, optical crosstalk, optical cross-connect, crosstalk, monitoring crosstalk

Procedia PDF Downloads 443
1554 Smart Irrigation Systems and Website: Based Platform for Farmer Welfare

Authors: Anusha Jain, Santosh Vishwanathan, Praveen K. Gupta, Shwetha S., Kavitha S. N.

Abstract:

Agriculture has a major impact on the Indian economy, with the highest employment ratio of any sector of the country. Currently, most traditional agricultural practices and farming methods are manual, which often prevents farmers from realizing their maximum productivity due to increasing labour costs, inefficient use of water sources leading to wastage of water, and inadequate soil moisture content, subsequently leading to food insecurity in the country. This research paper aims to solve this problem by developing a full-fledged web-application-based platform that can be associated with a microcontroller-based automated irrigation system, which schedules the irrigation of crops based on real-time soil moisture content measured by soil moisture sensors, centric to each crop’s requirements, using WSN (wireless sensor network) and M2M (machine-to-machine communication) concepts, thus optimizing the use of the available limited water resources and thereby maximizing the crop yield. This robust automated irrigation system provides end-to-end automation of the irrigation of crops under any circumstances, such as droughts, irregular rainfall patterns, and extreme weather conditions. The platform will also be capable of fostering a nationwide united farming community and ensuring the welfare of farmers. It is designed to equip farmers with prerequisite knowledge of technology and the latest farming practices in general. In order to achieve this, the MailChimp mailing service is used, through which the email addresses of interested farmers and individuals are recorded and curated articles on innovations in the world of agriculture are provided to the farmers via e-mail. In the proposed system, a service is enabled on the platform where nearby crop vendors can enter their pickup locations, accepted prices and other relevant information. This enables farmers to choose their vendors wisely. 
Along with this, we have created a blogging service that will enable farmers and agricultural enthusiasts to share experiences, helpful knowledge, hardships, etc., with the entire farming community. These are some of the many features that the platform has to offer.
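The moisture-driven scheduling described above typically amounts to threshold logic on the microcontroller. The sketch below shows one plausible shape of that logic, a hysteresis controller for the irrigation valve; the threshold values and function signature are illustrative assumptions, not the platform's actual firmware:

```python
def irrigation_command(moisture_pct, low=30.0, high=60.0, valve_open=False):
    """Hysteresis controller for an automated irrigation valve.

    Opens the valve when soil moisture falls below `low` and closes it once
    moisture recovers above `high`. The two-threshold band prevents rapid
    valve chatter around a single setpoint. Thresholds are illustrative and
    would be tuned per crop, matching the crop-centric scheduling described.
    """
    if moisture_pct < low:
        return True        # soil too dry: start irrigating
    if moisture_pct > high:
        return False       # soil wet enough: stop irrigating
    return valve_open      # inside the band: keep the current valve state

assert irrigation_command(25.0) is True                      # dry soil opens the valve
assert irrigation_command(65.0, valve_open=True) is False    # wet soil closes it
assert irrigation_command(45.0, valve_open=True) is True     # in-band: no change
```

In a WSN/M2M deployment, each sensor node would report `moisture_pct` to the controller, which issues the valve command and logs the decision to the web platform.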

Keywords: WSN (wireless sensor networks), M2M (M/C to M/C communication), automation, irrigation system, sustainability, SAAS (software as a service), soil moisture sensor

Procedia PDF Downloads 112
1553 An Architecture Based on Capsule Networks for the Identification of Handwritten Signature Forgery

Authors: Luisa Mesquita Oliveira Ribeiro, Alexei Manso Correa Machado

Abstract:

A handwritten signature is a unique means of identifying an individual, used to authenticate documents and to support investigations in criminal, legal, and banking contexts, among other applications. Signature verification is based on large amounts of biometric data, which are simple and easy to acquire, among other characteristics. Given this scenario, signature forgery is a recurring problem worldwide, and fast and precise techniques are needed to prevent crimes of this nature from occurring. This article presents a study of the efficiency of the Capsule Network in analyzing and recognizing signatures. The chosen architecture achieved accuracies of 98.11% and 80.15% on the CEDAR and GPDS databases, respectively.

Keywords: biometrics, deep learning, handwriting, signature forgery

Procedia PDF Downloads 65
1552 Analysis of the IEEE 802.15.4 MAC Parameters to Achieve Lower Packet Loss Rates

Authors: Imen Bouazzi

Abstract:

The IEEE 802.15.4 standard utilizes the CSMA-CA mechanism to control nodes’ access to the shared wireless communication medium. It is becoming the popular choice for various surveillance and control applications in wireless sensor networks (WSN). The benefit of this standard is evaluated with regard to the packet loss probability, which depends on the configuration of the IEEE 802.15.4 MAC parameters and the traffic load. Our aim is to evaluate the effects of various configurable MAC parameters on the performance of beaconless IEEE 802.15.4 networks under different traffic loads; static values of the IEEE 802.15.4 MAC parameters (macMinBE, macMaxCSMABackoffs, and macMaxFrameRetries) will be evaluated. For the performance analysis, we use the ns-2 network simulator.
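The role of the three MAC parameters can be illustrated with a simplified simulation of unslotted CSMA-CA: a node backs off for a random number of slots in [0, 2^BE − 1], performs a clear channel assessment (CCA), grows BE on a busy channel, and declares a channel-access failure once macMaxCSMABackoffs is exceeded. The channel-busy probability here is an abstract stand-in for traffic load, not an ns-2 model:

```python
import random

def tx_succeeds(p_busy, mac_min_be=3, mac_max_be=5, mac_max_backoffs=4, rng=random):
    """One unslotted CSMA-CA transmission attempt (simplified sketch of the
    IEEE 802.15.4 algorithm): back off, sense the channel; on a busy CCA,
    grow the backoff exponent and retry, until macMaxCSMABackoffs is
    exceeded, which counts as a channel-access failure (packet loss)."""
    be = mac_min_be
    for _ in range(mac_max_backoffs + 1):   # initial attempt + macMaxCSMABackoffs retries
        _ = rng.randrange(2 ** be)          # random backoff slots (timing abstracted away)
        if rng.random() >= p_busy:          # clear channel assessment succeeded
            return True
        be = min(be + 1, mac_max_be)
    return False

rng = random.Random(42)
n = 20000
loss = sum(not tx_succeeds(0.6, rng=rng) for _ in range(n)) / n
# With independent CCAs, 5 attempts at p_busy = 0.6 give loss ≈ 0.6^5 ≈ 0.078.
```

Raising macMaxCSMABackoffs lowers the access-failure rate exponentially in this idealized model, at the cost of latency, which is the kind of trade-off the parameter study above quantifies under realistic traffic in ns-2.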

Keywords: WSN, packet loss, CSMA/CA, IEEE-802.15.4

Procedia PDF Downloads 322
1551 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset for the Nusselt number and pressure drop values in the following range of dimensionless parameters: plate corrugation angles from 0° to 60°, Reynolds numbers from 10000 to 40000, pitch-to-height ratios from 1 to 4, and Prandtl numbers from 0.7 to 200. Based on the ANN performance graph, the three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes feed-forward propagation of the input parameters, evaluation of the loss function on the training and validation datasets, and back-propagation with weight and bias adjustment. The linear function was used as the activation function at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. In order to accelerate the ANN training, loss function minimization was achieved by the adaptive moment estimation algorithm (ADAM). The “MinMax” normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique is applied to the ANN using the new data. This procedure was repeated until loss function convergence was achieved, or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source ANN libraries such as scikit-learn, TensorFlow and Keras. Mean absolute percentage error values of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved by the ANN model. Therefore, higher accuracy compared to the generalized correlations was achieved. 
The performance validation of the obtained model was based on a comparison of predicted data with the experimental results yielding excellent accuracy.
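The {12-8-6} architecture and the preprocessing described above can be sketched compactly. The layer sizes, ReLU hidden activations, linear output, and "MinMax" scaling follow the abstract; the input/output dimensions and the random weights are illustrative placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax(x, lo, hi):
    """'MinMax' scaling of raw features to [0, 1] using training-set bounds."""
    return (x - lo) / (hi - lo)

def mlp_forward(x, weights, biases):
    """Forward pass of the {12-8-6}-hidden-neuron network described above:
    ReLU hidden layers, linear output layer."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ w + b)    # ReLU hidden layers
    return h @ weights[-1] + biases[-1]   # linear output: Nusselt number, pressure drop

# Assumed I/O: 4 inputs (corrugation angle, Re, pitch/height, Pr), 2 outputs.
sizes = [4, 12, 8, 6, 2]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

x_raw = np.array([[30.0, 25000.0, 2.0, 5.0]])          # one sample, raw units
lo = np.array([0.0, 10000.0, 1.0, 0.7])                # training-range minima
hi = np.array([60.0, 40000.0, 4.0, 200.0])             # training-range maxima
y = mlp_forward(minmax(x_raw, lo, hi), weights, biases)  # shape (1, 2)
```

Scaling all four inputs to [0, 1] before the forward pass is what keeps the loss gradients comparable across features, which is the stated reason for using MinMax normalization.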

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 69