Search results for: capacity performance
9857 An Analysis of Non-Elliptic Curve Based Primality Tests
Authors: William Wong, Zakaria Alomari, Hon Ching Lai, Zhida Li
Abstract:
Modern-day information security depends on implementing Diffie-Hellman, which requires the generation of prime numbers. Because the number of primes is infinite, it is impractical to store prime numbers for use, and therefore, primality tests are indispensable in modern-day information security. A primality test is a test to determine whether a number is prime or composite. There are two types of primality tests: deterministic tests and probabilistic tests. Deterministic tests adopt algorithms that provide a definite answer as to whether a given number is prime or composite, while probabilistic tests provide a probabilistic result that carries a degree of uncertainty. In this paper, we review three probabilistic tests: the Fermat Primality Test, the Miller-Rabin Test, and the Baillie-PSW Test, as well as one deterministic test, the Agrawal-Kayal-Saxena (AKS) Test. Furthermore, we analyse these tests. None of the tests reviewed is based on elliptic curves. The analysis demonstrates that, in the majority of real-world scenarios, the Baillie-PSW test's favorability stems from its typical operational complexity of O(log³ n) and its capacity to deliver accurate results for numbers below 2^64.
Keywords: primality tests, Fermat's primality test, Miller-Rabin primality test, Baillie-PSW primality test, AKS primality test
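As an illustration of how a probabilistic test trades certainty for speed, the sketch below implements the Miller-Rabin test in Python (the function name and the choice of random witnesses are ours, not from the paper); each additional round of random witnesses shrinks the probability that a composite number slips through.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test.

    Returns False if n is definitely composite, True if n is probably prime.
    The error probability for a composite n is at most 4**(-rounds).
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random witness
        x = pow(a, d, n)                 # a**d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # witness a proves n composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: composite
```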
Procedia PDF Downloads 88
9856 Impact of Increasing Distributed Solar PV Systems on Distribution Networks in South Africa
Authors: Aradhna Pandarum
Abstract:
South Africa is experiencing exponential growth in distributed solar PV installations. This is due to various factors, the predominant one being increasing electricity tariffs along with decreasing installation costs, resulting in attractive business cases for some end-users. Despite there being a variety of economic and environmental advantages associated with the installation of PV, their potential impact on distribution grids has yet to be thoroughly investigated. This is especially true since the locations of these units cannot be controlled by Network Service Providers (NSPs) and their output power is stochastic and non-dispatchable. This report details two case studies that were completed to determine the possible voltage and technical losses impact of increasing PV penetration in the Northern Cape of South Africa. Some major impacts considered for the simulations were ramping of PV generation due to intermittency caused by moving clouds, the size and overall hosting capacity, and the location of the systems. The main finding is that the technical impact is different on a constrained feeder versus a non-constrained feeder. The acceptable PV penetration level is much lower for a constrained feeder than for a non-constrained feeder, depending on where the systems are located.
Keywords: medium voltage networks, power system losses, power system voltage, solar photovoltaic
Procedia PDF Downloads 153
9855 Collaborative Energy Optimization for Multi-Microgrid Distribution System Based on Two-Stage Game Approach
Authors: Hanmei Peng, Yiqun Wang, Mao Tan, Zhuocen Dai, Yongxin Su
Abstract:
Efficient energy management in multi-microgrid distribution systems holds significant importance for enhancing the economic benefits of regional power grids. To better balance conflicts among various stakeholders, a two-stage game-based collaborative optimization approach is proposed in this paper, effectively addressing the realistic scenario involving both competition and collaboration among stakeholders. The first stage, aimed at maximizing individual benefits, involves constructing a non-cooperative tariff game model for the distribution network and surplus microgrids. In the second stage, considering power flow and physical line capacity constraints, we establish a cooperative P2P game model for the multi-microgrid distribution system, and the optimization employs the Lagrange method of multipliers to handle the complex constraints. Simulation results demonstrate that the proposed approach can effectively improve the system economics while harmonizing individual and collective rationality.
Keywords: cooperative game, collaborative optimization, multi-microgrid distribution system, non-cooperative game
Procedia PDF Downloads 71
9854 Evaluation of Risk and the Beneficial Effects of Synthesized Nano Silver-Based Disinfectant on Poultry Mortality and Health
Authors: Indrajeet Kumar, Jayanta Bhattacharya
Abstract:
This study evaluated the potential use of nanosilver (nAg) as a disinfectant and antimicrobial growth promoter supplement for poultry. The experiments were conducted in the Kangsabati river basin region, in West Medinipur district, West Bengal, India, for six months. Two poultry farms were adopted for the experiment. The rural economy of this region, from Jhargram to Barkola, is heavily dependent on contract poultry farming. The water samples were collected from the water source of each poultry farm, which has been used for poultry drinking purposes. The bacteriological analysis of the water samples revealed that the total bacterial counts (total coliform and E. coli) were higher than the acceptable standards. The bacterial loads badly affected the growth performance and health of the poultry. For disinfection, a number of chemical compounds (such as formaldehyde, calcium hypochlorite, sodium hypochlorite, and sodium bicarbonate) have been used in typical commercial formulations. However, the effects of all these chemical compounds have not been significant over time. As a part of our research-to-market initiative, we used a nanosilver (nAg) formulation as a disinfectant. The nAg formulation was synthesized by a hydrothermal technique and characterized by UV-visible spectroscopy, TEM, SEM, and EDX. The obtained results revealed that the mortality rate of the poultry was reduced with the nAg formulation compared to the mortality rate of the negative control. Moreover, the income of the farmer families increased by 10-20% due to lower mortality and better health of the poultry.
Keywords: farm water, nanosilver, field application, poultry performance
Procedia PDF Downloads 162
9853 Great Food, No Atmosphere: A Review of Performance Nutrition for Application to Extravehicular Activities in Spaceflight
Authors: Lauren E. Church
Abstract:
Background: Extravehicular activities (EVAs) are a critical aspect of missions aboard the International Space Station (ISS). It has long been noted that the spaceflight environment and the physical demands of EVA cause physiological and metabolic changes in humans; this review aims to combine these findings with nutritional studies in analogues of the spaceflight and EVA environments to make nutritional recommendations for astronauts scheduled for and immediately returning from EVAs. Results: Energy demands increase during orbital spaceflight and increase further during EVA. Another critical element of EVA nutrition is adequate hydration. Orbital EVA appears to provide adequate hydration under current protocol, but during lunar surface EVA (LEVA) and in a 10 km lunar walk-back test, astronauts have stated that up to 20% more water was needed. Previous attempts at in-suit edible sustenance have not been taken up by astronauts widely enough to be economically viable. In elite endurance athletes, a mixture of glucose and fructose is used in gels, improving performance. Discussion: A combination of non-caffeinated energy drink and simple water should be available for astronauts during EVA, allowing more autonomy. There should also be provision of gels or a similar product containing appropriate sodium levels to maintain hydration, but not so much as to hyperhydrate through renal water reabsorption. It is also suggested that short breaks be built into the schedule of EVAs for these gels to be consumed, as it is speculated that the reason for low uptake of in-suit sustenance is the lack of time available in which to consume it.
Keywords: astronaut, nutrition, space, sport
Procedia PDF Downloads 128
9852 Tandem Concentrated Photovoltaic-Thermoelectric Hybrid System: Feasibility Analysis and Performance Enhancement Through Material Assessment Methodology
Authors: Shuwen Hu, Yuancheng Lou, Dongxu Ji
Abstract:
Photovoltaic (PV) power generation, as one of the most commercialized methods of utilizing solar power, can only convert a limited range of the solar spectrum into electricity, whereas the majority of the solar energy is dissipated as heat. To address this problem, a thermoelectric (TE) module is often integrated with the concentrated PV module for waste heat recovery and regeneration. In this research, a feasibility analysis is conducted for the tandem concentrated photovoltaic-thermoelectric (CPV-TE) hybrid system considering various operational parameters as well as TE material properties. Furthermore, the power output density of the CPV-TE hybrid system is maximized by selecting the optimal TE material with the application of a systematic assessment methodology. In the feasibility analysis, CPV-TE is found to be more advantageous than a sole CPV system except under a high optical concentration ratio with a low cold-side convective coefficient. It is also shown that the effects of the TE material properties, including the Seebeck coefficient, thermal conductivity, and electrical resistivity, on the feasibility of CPV-TE interact with each other and might have opposite effects on the system performance under different operational conditions. In addition, the optimal TE material selected by the proposed assessment methodology can improve the system power output density by 227 W/m² under highly concentrated solar irradiance and hence broaden the feasible range of CPV-TE with respect to the optical concentration ratio.
Keywords: feasibility analysis, material assessment methodology, photovoltaic waste heat recovery, tandem photovoltaic-thermoelectric
Procedia PDF Downloads 72
9851 Coarse-Grained Computational Fluid Dynamics-Discrete Element Method Modelling of the Multiphase Flow in Hydrocyclones
Authors: Li Ji, Kaiwei Chu, Shibo Kuang, Aibing Yu
Abstract:
Hydrocyclones are widely used to classify particles by size in industries such as mineral processing and chemical processing. The particles to be handled usually have a broad range of size distributions and sometimes density distributions, which have to be properly considered, causing challenges in the modelling of hydrocyclones. The combined approach of Computational Fluid Dynamics (CFD) and the Discrete Element Method (DEM) offers convenience in modelling particle size/density distributions. However, its direct application to hydrocyclones is computationally prohibitive because there are billions of particles involved. In this work, a CFD-DEM model based on the concept of the coarse-grained (CG) model is developed to model the solid-fluid flow in a hydrocyclone. The DEM is used to model the motion of discrete particles by applying Newton's laws of motion. Here, a particle assembly containing a certain number of particles with the same properties is treated as one CG particle. The CFD is used to model the liquid flow by numerically solving the local-averaged Navier-Stokes equations facilitated with the Volume of Fluid (VOF) model to capture the air core. The results are analyzed in terms of fluid and solid flow structures, and particle-fluid, particle-particle and particle-wall interaction forces. Furthermore, the calculated separation performance is compared with the measurements. The results obtained from the present study indicate that this approach can offer an alternative way to examine the flow and performance of hydrocyclones.
Keywords: computational fluid dynamics, discrete element method, hydrocyclone, multiphase flow
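To illustrate the DEM side of such a coupled model, the sketch below integrates Newton's second law for a single coarse-grained particle subject to gravity, buoyancy and a fluid drag force; the drag law (Stokes drag) and all parameter values are illustrative assumptions, not the paper's hydrocyclone setup.

```python
import numpy as np

# Illustrative coarse-grained particle in a fluid (assumed properties).
rho_p, rho_f = 2650.0, 1000.0      # particle / fluid density, kg/m^3
d_cg = 1.0e-3                      # coarse-grained particle diameter, m
mu = 1.0e-3                        # fluid dynamic viscosity, Pa.s
m = rho_p * np.pi * d_cg**3 / 6.0  # particle mass
g = np.array([0.0, 0.0, -9.81])

def drag_force(v_rel):
    """Stokes drag on a sphere (illustrative low-Reynolds-number law)."""
    return 3.0 * np.pi * mu * d_cg * v_rel

x = np.zeros(3)                        # position
v = np.zeros(3)                        # velocity
u_fluid = np.array([0.05, 0.0, 0.0])   # assumed local fluid velocity from CFD

dt = 1.0e-4
for step in range(10000):
    f = m * g * (1.0 - rho_f / rho_p)  # gravity plus buoyancy
    f += drag_force(u_fluid - v)       # particle-fluid interaction force
    a = f / m                          # Newton's second law
    v += a * dt                        # explicit Euler update
    x += v * dt

print("final velocity [m/s]:", v)
```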
Procedia PDF Downloads 408
9850 Experimental Investigation of Heat Pipe with Annular Fins under Natural Convection at Different Inclinations
Authors: Gangacharyulu Dasaroju, Sumeet Sharma, Sanjay Singh
Abstract:
A heat pipe is characterised as a superconductor of heat because of its excellent heat removal ability. The operation of several engineering systems results in the generation of heat. This may cause several overheating problems and lead to failure of the systems. To overcome this problem and to achieve the desired rate of heat dissipation, there is a need to study the performance of a heat pipe with annular fins under free convection at different inclinations. This study demonstrates the effect of different mass flow rates of hot fluid into the evaporator section on the condenser-side heat transfer coefficient with annular fins under natural convection at different inclinations. In this study, annular fins with a fin length of 10 mm, a fin thickness of 1 mm and a fin spacing of 6 mm are used for the experimental work. The main aim of the present study is to discover at what inclination angles the maximum heat transfer coefficient is achieved. The heat transfer coefficient on the external surface of the heat pipe condenser section is determined experimentally and then predicted by empirical correlations. The results obtained from the experiments and from the Churchill and Chu relation for laminar flow are in fair agreement, with not more than 22% deviation. A maximum heat transfer coefficient of 31.2 W/(m²·K) is seen at a 25° tilt angle, and a minimum condenser heat transfer coefficient of 26.4 W/(m²·K) is seen at a 45° tilt angle and a 200 ml/min mass flow rate. The inclination angle also affects the thermal performance of the heat pipe. Beyond 25° inclination, the heat transport rate starts to decrease.
Keywords: heat pipe, annular fins, natural convection, condenser heat transfer coefficient, tilt angle
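For reference, the sketch below evaluates one common form of the Churchill and Chu natural-convection correlation (the horizontal-cylinder form) to predict a condenser-side heat transfer coefficient; the diameter and air properties used are assumptions for illustration, not the experimental values from this study.

```python
def churchill_chu_h(Ra: float, Pr: float, k: float, D: float) -> float:
    """Heat transfer coefficient from the Churchill-Chu correlation for
    natural convection around a horizontal cylinder (valid for Ra <= 1e12).

    Ra: Rayleigh number based on diameter D
    Pr: Prandtl number, k: fluid thermal conductivity [W/(m.K)], D: diameter [m]
    """
    nu = (0.60 + 0.387 * Ra**(1 / 6)
          / (1.0 + (0.559 / Pr)**(9 / 16))**(8 / 27))**2
    return nu * k / D

# Illustrative values for air around a ~25 mm condenser section (assumed).
Pr, k, D = 0.71, 0.026, 0.025
for Ra in (1e4, 1e5, 1e6):
    print(f"Ra = {Ra:.0e}:  h ~ {churchill_chu_h(Ra, Pr, k, D):.1f} W/(m2.K)")
```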
Procedia PDF Downloads 154
9849 Digital Design and Practice of The Problem Based Learning in College of Medicine, Qassim University, Saudi Arabia
Authors: Ahmed Elzainy, Abir El Sadik, Waleed Al Abdulmonem, Ahmad Alamro, Homaidan Al-Homaidan
Abstract:
Problem-based learning (PBL) is an educational modality that stimulates critical and creative thinking. PBL has been practiced in the College of Medicine, Qassim University, Saudi Arabia, since 2002 with offline face-to-face activities. Therefore, crucial technological changes towards paperless work were needed. The aim of the present study was to design and implement the digitalization of the PBL activities and to evaluate its impact on students' and tutors' performance. This approach promoted the involvement of all stakeholders after their awareness of the techniques of using online tools. IT support, learning resources facilities, and the required multimedia were prepared. Students' and staff perception surveys reflected their satisfaction with these remarkable changes. The students were interested in the new digitalized materials and educational design, which facilitated the conduct of PBL sessions and provided sufficient time for discussion and peer sharing of knowledge. It enabled the tutors to supervise and track students' activities on the Learning Management System. It could be concluded that introducing the digitalization of the PBL activities promoted the students' performance and engagement and enabled a better evaluation of PBL materials as well as prompt student and staff feedback. These positive findings encouraged the college to implement the digitalization approach in other educational activities, such as Team-Based Learning, as an additional opportunity for further development.
Keywords: multimedia in PBL, online PBL, problem-based learning, PBL digitalization
Procedia PDF Downloads 120
9848 Upflow Anaerobic Sludge Blanket Reactor Followed by Dissolved Air Flotation Treating Municipal Sewage
Authors: Priscila Ribeiro dos Santos, Luiz Antonio Daniel
Abstract:
Inadequate access to clean water and sanitation has become one of the most widespread problems affecting people throughout the developing world, leading to an unceasing need for low-cost and sustainable wastewater treatment systems. The UASB technology has been widely employed as a suitable and economical option for the treatment of sewage in developing countries, as it involves low initial investment, low energy requirements, low operation and maintenance costs, high loading capacity, short hydraulic retention times, long solids retention times and low sludge production. Dissolved air flotation, in turn, is a good option for the post-treatment of anaerobic effluents, being capable of producing high quality effluents in terms of total suspended solids, chemical oxygen demand, phosphorus, and even pathogens. This work presents an evaluation and monitoring, over a period of 6 months, of one compact full-scale system with this configuration, UASB reactors followed by dissolved air flotation units (DAF), operating in Brazil. It was verified to be a successful treatment system, and the subject is of relevance since dissolved air flotation treating UASB reactor effluents is not widely covered in the literature. The study covered the removal and behavior of several variables, such as turbidity, total suspended solids (TSS), chemical oxygen demand (COD), Escherichia coli, total coliforms and Clostridium perfringens. The physicochemical variables were analyzed according to the protocols established by the Standard Methods for the Examination of Water and Wastewater. For microbiological variables, such as Escherichia coli and total coliforms, the "pour plate" technique was used with Chromocult Coliform Agar (Merck Cat. No. 1.10426) serving as the culture medium, while the microorganism Clostridium perfringens was analyzed through the membrane filtration technique, with Agar m-CP (Oxoid Ltda, England) serving as the culture medium. Approximately 74% of total COD was removed in the UASB reactor, and the complementary removal achieved during the flotation process resulted in 88% COD removal from the raw sewage; thus the initial COD concentration of 729 mg.L-1 decreased to 87 mg.L-1. In terms of particulate COD, the overall removal efficiency for the whole system was about 94%, decreasing from 375 mg.L-1 in raw sewage to 29 mg.L-1 in the final effluent. The UASB reactor removed on average 77% of the TSS from raw sewage, while the dissolved air flotation process did not work as expected, removing only 30% of TSS from the anaerobic effluent. The final effluent presented an average TSS concentration of 38 mg.L-1. The turbidity was significantly reduced, leading to an overall removal efficiency of 80% and a final turbidity of 28 NTU. The treated effluent still presented a high concentration of fecal pollution indicators (E. coli, total coliforms, and Clostridium perfringens), showing that the system did not perform well in removing pathogens. Clostridium perfringens was the organism which underwent the highest removal by the treatment system. The results can be considered satisfactory for the physicochemical variables, taking into account the simplicity of the system; nevertheless, a post-treatment is necessary to improve the microbiological quality of the final effluent.
Keywords: dissolved air flotation, municipal sewage, UASB reactor, treatment
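The removal efficiencies quoted above follow directly from influent and effluent concentrations; the short sketch below reproduces that arithmetic for the total COD figures reported (the helper function name is ours), and the same helper applies to TSS and turbidity.

```python
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percentage removal between influent and effluent concentrations."""
    return 100.0 * (c_in - c_out) / c_in

# Total COD reported for the whole UASB + DAF system (mg/L): 729 -> 87.
print(f"Overall COD removal: {removal_efficiency(729, 87):.1f}%")  # ~88%
```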
Procedia PDF Downloads 331
9847 Comparison and Evaluation of Joomla and WordPress Web Content Management Systems for Effective Site Administration
Authors: Abubakar Ibrahim, Muhammad Garba, Adelusi Oluwaseyi Abiodun
Abstract:
Website development and administration have become very critical issues in many organisations because most organisations have embraced the use of the internet to deliver their services and products seamlessly. Yet even with the huge advantages of being present on the internet, websites are difficult and expensive to develop and maintain. In recent years, a number of open-source web Content Management Systems (CMS) have been developed to allow organisations to internally develop and maintain their websites without the need to hire professional web developers to provide such services for them. This study aimed at performing a comparative analysis of the two most widely used open-source CMS, Joomla and WordPress, based on the following criteria: intuitiveness, responsiveness, richness in features, meeting expectations, feeling secure, ease of navigation, structure, and performance. Two identical applications were developed using the said CMS. In this study, a purposive sampling technique was adopted to administer the questionnaires, and a total of 50 respondents were selected to surf the sites and fill out a questionnaire based on their experience of the two sites. GTmetrix was used to carry out further analysis of the applications. The result shows that Joomla is the best for developing an e-commerce site, as it performs best in terms of performance, better structure, meeting user expectations, rich features, and functionality. Even though WordPress is intuitive and easy to navigate, one can still argue that Joomla is superior.
Keywords: open source, content management system, Joomla, WordPress
Procedia PDF Downloads 60
9846 Implementation of Successive Interference Cancellation Algorithms in the 5G Downlink
Authors: Mokrani Mohamed Amine
Abstract:
In this paper, we have implemented successive interference cancellation algorithms in the 5G downlink. We have calculated the maximum throughput in Frequency Division Duplex (FDD) mode in the downlink, where we obtained a value equal to 836932 b/ms. The transmitter is of the Multiple Input Multiple Output (MIMO) type with eight transmitting and receiving antennas. Each of the eight antennas simultaneously transmits a data rate of 104616 b/ms that contains the binary messages of the three users; in this case, the Cyclic Redundancy Check (CRC) is negligible, and the MIMO category is spatial diversity. The technology used for this is called Non-Orthogonal Multiple Access (NOMA) with Quadrature Phase Shift Keying (QPSK) modulation. The transmission is done in a Rayleigh fading channel with the presence of obstacles. The MIMO Successive Interference Cancellation (SIC) receiver with two transmitting and receiving antennas recovers its binary message without errors for certain values of transmission power such as 50 dBm, with 0.054485% errors when the transmitted power is 20 dBm and 0.00286763% errors for a transmitted power of 32 dBm (in the case of user 1), as well as 0.0114705% errors when the transmitted power is 20 dBm and 0.00286763% errors for a power of 24 dBm (in the case of user 2), by applying the steps involved in SIC.
Keywords: 5G, NOMA, QPSK, TBS, LDPC, SIC, capacity
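To make the SIC steps concrete, the sketch below simulates a minimal two-user power-domain NOMA downlink with QPSK over an AWGN channel (not the Rayleigh MIMO setup of the paper): the near user first detects and cancels the far user's higher-power signal before detecting its own. Power allocations and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sym = 20000
p_far, p_near = 0.8, 0.2          # assumed power split (far user gets more power)
noise_var = 0.01

def qpsk(bits):
    """Map bit pairs to unit-energy QPSK symbols."""
    return ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

def detect(sym):
    """Hard QPSK decision back to bits."""
    bits = np.empty(2 * sym.size, dtype=int)
    bits[0::2] = (sym.real < 0).astype(int)
    bits[1::2] = (sym.imag < 0).astype(int)
    return bits

bits_far = rng.integers(0, 2, 2 * n_sym)
bits_near = rng.integers(0, 2, 2 * n_sym)
x = np.sqrt(p_far) * qpsk(bits_far) + np.sqrt(p_near) * qpsk(bits_near)
y = x + np.sqrt(noise_var / 2) * (rng.standard_normal(n_sym)
                                  + 1j * rng.standard_normal(n_sym))

# SIC at the near user: 1) detect the far user's symbols, 2) re-modulate and
# subtract them, 3) detect the near user's own symbols from the residual.
far_hat = detect(y)
y_clean = y - np.sqrt(p_far) * qpsk(far_hat)
near_hat = detect(y_clean)

print("near-user BER after SIC:", np.mean(near_hat != bits_near))
```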
Procedia PDF Downloads 103
9845 Energy Potential of Turkey and Evaluation of Solar Energy Technology as an Alternative Energy
Authors: Naci Büyükkaracığan, Murat Ahmet Ökmen
Abstract:
In developing countries such as Turkey, rapid population growth and industrialization are causing energy demand to increase rapidly. Energy is an important and indispensable factor in industry. At the same time, energy is one of the main indicators that reflect a country's economic and social development potential. There is a linear relationship between energy consumption and social development, and in parallel with this, energy consumption increases with economic growth and prosperity. In recent years, energy consumption has been continuously increasing in Turkey because of population and economic growth. About 80% of the energy used in Turkey is supplied from abroad, while almost all of the energy obtained domestically comes from hydropower. As an alternative, studies on determining and using potential renewable energy sources such as solar energy have been carried out in recent years. In this study, first of all, the situation of energy sources in Turkey was examined. Information on reserve/capacity, production and consumption values of the energy sources was emphasized. For this purpose, energy production and consumption, CO2 emissions and electricity consumption of countries were investigated. Energy consumption and electricity consumption per capita were comparatively analyzed.
Keywords: energy potential, alternative energy sources, solar energy, Turkey
Procedia PDF Downloads 440
9844 Tracking of Linarin from the Ethyl Acetate Fraction of Melinjo (Gnetum gnemon L.) Seeds Using Preparative High Performance Liquid Chromatography
Authors: Asep Sukohar, Ramadhan Triyandi, Muhammad Iqbal, Sahidin, Suharyani
Abstract:
Introduction: Resveratrol belongs to a class of bioactive chemicals found in melinjo, which has a wide range of biological actions. The purpose of this study is to determine the linarin content of the melinjo fraction by using preparative high-performance liquid chromatography (prep-HPLC). Method: Extraction used the Soxhlet method with 96% ethanol solvent. Fractionation used ethyl acetate and ethanol in a ratio of 1:1. Tracing of the linarin compound used prep-HPLC with a mobile phase of distilled water : methanol (55:45, v/v). The presence of linarin was detected at a wavelength of 215 nm. Fourier Transform Infrared (FTIR) spectroscopy was used to identify the functional groups of the compound. Result: The retention time required to elute the ethyl acetate fraction was 2.601 minutes. Compound identification using Fourier Transform Infrared Spectroscopy - Quest Attenuated Total Reflectance (FTIR-QATR) has a similarity value range with standards from 0 to 1000. The elution results of the ethyl acetate fraction have similarity values with the standard compounds linarin (668), resveratrol (578), and catechin (455). Conclusion: Tracing of the active compound in the ethyl acetate fraction of Gnetum gnemon L. using prep-HPLC showed a strong indication of the presence of the linarin compound.
Keywords: Gnetum gnemon L., linarin, prep-HPLC, ethyl acetate fraction
Procedia PDF Downloads 117
9843 Monocular Depth Estimation Benchmarking with Thermal Dataset
Authors: Ali Akyar, Osman Serdar Gedik
Abstract:
Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera. It predicts how far each pixel in the 2D image is from the capturing point. Several important Monocular Depth Estimation (MDE) studies are based on Vision Transformers (ViT). We benchmark three major studies. The first work aims to build a simple and powerful foundation model that deals with any image under any condition. The second work proposes a method that mixes multiple datasets during training and uses a robust training objective. The third work combines generalization performance with state-of-the-art results on specific datasets. Although there are studies with thermal images too, we wanted to benchmark these three non-thermal, state-of-the-art studies with a hybrid image dataset captured using Multi-Spectral Dynamic Imaging (MSX) technology. MSX technology produces detailed thermal images by bringing together the thermal and visual spectrums. Thanks to this technology, our dataset images are not blurred and poorly detailed like normal thermal images. On the other hand, they are not taken under the ideal light conditions of RGB images. We compared the three methods under test with our thermal dataset, which had not been done before. Additionally, we propose an image enhancement deep learning model for thermal data. This model helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after using our proposed model, the performance of the three methods under test increased significantly for thermal image depth prediction.
Keywords: monocular depth estimation, thermal dataset, benchmarking, vision transformers
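Benchmarking MDE models usually comes down to a handful of per-pixel error metrics; the sketch below computes two standard ones (absolute relative error and RMSE) plus the delta < 1.25 accuracy between a predicted and a ground-truth depth map. The metric choice is a common convention in the MDE literature, not something specified in this abstract.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, mask=None):
    """Standard monocular-depth-estimation error metrics on valid pixels."""
    if mask is None:
        mask = gt > 0                      # ignore pixels with no ground truth
    p, g = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(p - g) / g)   # absolute relative error
    rmse = np.sqrt(np.mean((p - g) ** 2))  # root mean squared error
    ratio = np.maximum(p / g, g / p)
    delta1 = np.mean(ratio < 1.25)         # fraction of "close enough" pixels
    return abs_rel, rmse, delta1

# Toy example with random depth maps (metres).
rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 10.0, size=(480, 640))
pred = gt * rng.normal(1.0, 0.05, size=gt.shape)   # 5% multiplicative noise
print("AbsRel, RMSE, delta<1.25:", depth_metrics(pred, gt))
```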
Procedia PDF Downloads 32
9842 Worst-Case Load Shedding in Electric Power Networks
Authors: Fu Lin
Abstract:
We consider the worst-case load-shedding problem in electric power networks where a number of transmission lines are to be taken out of service. The objective is to identify a prespecified number of line outages that lead to the maximum interruption of power generation and load at the transmission level, subject to the active power-flow model, the load and generation capacity of the buses, and the phase-angle limit across the transmission lines. For this nonlinear model with binary constraints, we show that all decision variables are separable except for the nonlinear power-flow equations. We develop an iterative decomposition algorithm, which converts the worst-case load shedding problem into a sequence of small subproblems. We show that the subproblems are either convex problems that can be solved efficiently or nonconvex problems that have closed-form solutions. Consequently, our approach is scalable for large networks. Furthermore, we prove the convergence of our algorithm to a critical point, and the objective value is guaranteed to decrease throughout the iterations. Numerical experiments with IEEE test cases demonstrate the effectiveness of the developed approach.
Keywords: load shedding, power system, proximal alternating linearization method, vulnerability analysis
Procedia PDF Downloads 140
9841 Transformation Strategies of the Nigerian Textile and Clothing Industries: The Integration of China Clothing Sector Model
Authors: Adetoun Adedotun Amubode
Abstract:
Nigeria's textile industry was the second largest in Africa after Egypt's, with more than 250 vibrant factories and over 50 percent capacity utilization, contributing to foreign exchange earnings and employment generation. Currently, multifaceted challenges such as epileptic power supply, inconsistent government policies, growing digitalization, smuggling of foreign textiles, insecurity and the inability of the local industries to compete with foreign products, especially Chinese textiles, have created a hostile environment for the sector. This led to the closure of most of the textile industries. China's textile industry has experienced institutional change and industrial restructuring and holds 30% of the world's market share. This paper examined the strategies adopted by China in transforming her textile and clothing industries and designed a model for the integration of these strategies to improve the competitive strength and growth of the Nigerian textile and clothing industries in a dynamic and changing market. The paper concludes that institutional support, regional production, export-oriented policy, value-added and branding cultivation, technological upgrading and enterprise resource planning should be integrated into the Nigerian clothing and textile industries.
Keywords: clothing, industry, integration, Nigerian, textile, transformation
Procedia PDF Downloads 156
9840 Design, Fabrication, and Study of Droplet Tube Based Triboelectric Nanogenerators
Authors: Yana Xiao
Abstract:
The invention of triboelectric nanogenerators (TENGs) provides an effective approach to sustainable energy harvesting. Liquid-solid interface-based TENGs have been researched, by virtue of their low friction, for harvesting energy from raindrops, rivers, and oceans in the form of water flows. However, TENGs based on droplets have rarely been investigated. In this study, we propose a new kind of droplet tube-based TENG (DT-TENG) with free-standing and reformative grating electrodes. Both straight and curved DT-TENGs were designed, fabricated, and evaluated, including a straight-tube TENG with 27 electrodes and curved-tube TENGs with a 25 cm radius of curvature, at inclinations of 30°, 45° and 60°, respectively. Different materials and hydrophobicity treatments for the tubes have also been studied, together with a discussion of the mechanism and applications of DT-TENGs. As different types of liquid give different energy outputs, this kind of DT-TENG can potentially be used in laboratories to identify liquids or solvents. In addition, a smart fishing float is contrived, which can recognize different levels of movement speed brought about by different weights and generate corresponding electric signals to alert the angler. In this experiment, the electricity generation performance when using a PVC helix tube wound around a cylinder is similar to that of the straight configuration at an inclination of 45°. This new structure changes the direction of a water drop or flow without losing kinetic energy, which makes it possible to utilize the Helix-Tube-TENG to harvest energy from different building morphologies.
Keywords: triboelectric nanogenerator, energy harvest, liquid tribomaterial, structure innovation
Procedia PDF Downloads 90
9839 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different information about the data; therefore, linking multiple data sources is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data. Ensemble learning is a versatile machine learning model in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most of the traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
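For readers unfamiliar with clustering ensembles, the sketch below shows the simplest single-objective baseline against which methods like FOMOCE are positioned: several base clusterings are combined through a co-association matrix and a consensus partition is extracted from it. This is a generic evidence-accumulation scheme on toy single-source data, not the FOMOCE algorithm itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
# Toy data: three Gaussian blobs standing in for one data source.
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in ((0, 0), (4, 0), (2, 4))])
n = len(X)

# 1) Generate diverse base clusterings (different k and seeds).
base_labels = [KMeans(n_clusters=k, n_init=5, random_state=s).fit_predict(X)
               for s, k in enumerate((2, 3, 3, 4, 5))]

# 2) Accumulate evidence: co_assoc[i, j] = fraction of base clusterings
#    that put samples i and j in the same cluster.
co_assoc = np.zeros((n, n))
for labels in base_labels:
    co_assoc += (labels[:, None] == labels[None, :])
co_assoc /= len(base_labels)

# 3) Consensus partition: hierarchical clustering on 1 - co-association.
dist = 1.0 - co_assoc
np.fill_diagonal(dist, 0.0)
consensus = fcluster(linkage(squareform(dist), method="average"),
                     t=3, criterion="maxclust")
print("consensus cluster sizes:", np.bincount(consensus)[1:])
```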
Procedia PDF Downloads 189
9838 An Analytical Study on Rotational Capacity of Beam-Column Joints in Unit Modular Frames
Authors: Kyung-Suk Choi, Hyung-Joon Kim
Abstract:
Modular structural systems are constructed by assembling prefabricated unit modular frames on-site. This can significantly reduce building construction time. Their structural design is usually carried out under the assumption that the load-carrying mechanism is similar to that of a traditional steel moment-resisting system. However, the two systems differ in terms of beam-column connection details, which may strongly influence the lateral structural behavior. In particular, the presence of access holes in a beam-column joint of a unit modular frame could cause undesirable failure during strong earthquakes. Therefore, this study carried out finite element analyses of unit modular frames to investigate the cyclic behavior of beam-column joints with the structural influence of access holes. Analysis results show that the unit modular frames present a stable cyclic response with large deformation capacities, and their joints are classified as semi-rigid connections.
Keywords: unit modular frame, steel moment connection, nonlinear analytical model, moment-rotation relation
Procedia PDF Downloads 619
9837 Heterogeneity of Soil Moisture and Its Impacts on the Mountainous Watershed Hydrology in Northwest China
Authors: Chansheng He, Zhongfu Wang, Xiao Bai, Jie Tian, Xin Jin
Abstract:
Heterogeneity of soil hydraulic properties directly affects hydrological processes at different scales. Understanding the heterogeneity of soil hydraulic properties such as soil moisture is therefore essential for modeling watershed ecohydrological processes, particularly in hard-to-access, topographically complex mountainous watersheds. This study maps spatial variations of soil moisture with an in situ observation network that consists of sampling points, zones, and tributaries, and monitors the corresponding hydrological variables of air and soil temperatures, evapotranspiration, infiltration, and runoff in the Upper Reach of the Heihe River Watershed, the Heihe being the second largest inland river (terminal lake) in Northwest China, with a drainage area of over 128,000 km². Subsequently, the study uses a hydrological model, SWAT (Soil and Water Assessment Tool), to simulate the effects of the heterogeneity of soil moisture on watershed hydrological processes. The spatial clustering method Full-Order-CLK was employed to derive five soil heterogeneity zones (Configurations 97, 80, 65, 40, and 20) as soil input to SWAT. Results show the simulations by the SWAT model with the spatially clustered soil hydraulic information from the field sampling data had a much better representation of the soil heterogeneity and more accurate performance than the model using the average soil property values for each soil type derived from the coarse soil datasets. Thus, incorporating detailed field-sampled soil heterogeneity data greatly improves the performance of hydrologic modeling.
Keywords: heterogeneity, soil moisture, SWAT, up-scaling
Procedia PDF Downloads 346
9836 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System
Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi
Abstract:
Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate Pilot-Assisted Channel Estimation for DC-biased Optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error-rate (BER) performance of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance compared to the conventional system without pilot-assisted channel estimation. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm can more accurately estimate the channel and achieves better BER performance when compared to the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ with LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
Keywords: channel estimation, OFDM, pilot-assist, VLC
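The difference between the two estimators can be seen in a few lines: on pilot subcarriers, the LS estimate simply divides the received pilots by the transmitted ones, while the LMMSE estimate additionally filters the LS estimate with the channel correlation and the operating SNR. The NumPy sketch below is a simplified frequency-domain model, not the MATLAB/Simulink DCO-OFDM chain of the paper, and assumes an exponential channel correlation across pilots.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pilots = 64
snr_db = 25
snr = 10 ** (snr_db / 10)

# Assumed channel correlation across pilot subcarriers (exponential model).
idx = np.arange(n_pilots)
R_hh = 0.9 ** np.abs(idx[:, None] - idx[None, :])

# Draw one correlated Rayleigh channel realisation on the pilot subcarriers.
L_chol = np.linalg.cholesky(R_hh)
z = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)
h = L_chol @ z

x_p = np.ones(n_pilots)    # known pilot symbols (all ones, unit power)
noise = (rng.standard_normal(n_pilots)
         + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2 * snr)
y_p = h * x_p + noise

# Least-squares estimate: per-pilot division.
h_ls = y_p / x_p

# LMMSE estimate: filter the LS estimate with channel statistics and SNR.
W = R_hh @ np.linalg.inv(R_hh + np.eye(n_pilots) / snr)
h_lmmse = W @ h_ls

mse = lambda est: np.mean(np.abs(est - h) ** 2)
print(f"LS MSE:    {mse(h_ls):.4f}")
print(f"LMMSE MSE: {mse(h_lmmse):.4f}")   # should be lower than LS
```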
Procedia PDF Downloads 180
9835 Development of an Aptamer-Molecularly Imprinted Polymer Based Electrochemical Sensor to Detect Pathogenic Bacteria
Authors: Meltem Agar, Maisem Laabei, Hannah Leese, Pedro Estrela
Abstract:
Pathogenic bacteria and the diseases they cause have become a global problem. Their early detection is vital and is only possible by detecting the bacteria causing the disease accurately and rapidly. Great progress has been made in this field with the use of biosensors. Molecularly imprinted polymers have gained broad interest because of their excellent properties compared to natural receptors, such as being stable in a variety of conditions, inexpensive, biocompatible and having a long shelf life. These properties make molecularly imprinted polymers an attractive candidate for use in biosensors. This study aimed to produce an aptamer-molecularly imprinted polymer based electrochemical sensor by combining the properties of molecularly imprinted polymers with the enhanced specificity offered by DNA aptamers. These 'apta-MIP' sensors were used for the detection of Staphylococcus aureus and Escherichia coli. The experimental parameters for the fabrication of the sensor were optimized, and detection of the bacteria was evaluated via Electrochemical Impedance Spectroscopy. Sensitivity and selectivity experiments were conducted. Furthermore, molecularly imprinted polymer only and aptamer only electrochemical sensors were produced separately, and their performance was compared with the electrochemical sensor produced in this study. The aptamer-molecularly imprinted polymer based electrochemical sensor showed good sensitivity and selectivity in terms of detection of Staphylococcus aureus and Escherichia coli. The performance of the sensor was assessed in buffer solution and tap water.
Keywords: aptamer, electrochemical sensor, Staphylococcus aureus, molecularly imprinted polymer
Procedia PDF Downloads 118
9834 Verification of Sr-90 Determination in Water and Spruce Needles Samples Using IAEA-TEL-2016-04 ALMERA Proficiency Test Samples
Authors: S. Visetpotjanakit, N. Nakkaew
Abstract:
Determination of 90Sr in environmental samples has been widely developed with several radioanalytical methods and radiation measurement techniques, since 90Sr is one of the most hazardous radionuclides produced in nuclear reactors. A liquid extraction technique using di-(2-ethylhexyl) phosphoric acid (HDEHP) to separate and purify 90Y, and Cherenkov counting using a liquid scintillation counter to determine 90Y in secular equilibrium with 90Sr, was developed and performed at our institute, the Office of Atoms for Peace. The approach is inexpensive, non-laborious, and fast for analysing 90Sr in environmental samples. To validate our analytical performance against the accuracy and precision criteria, determination of 90Sr in the IAEA-TEL-2016-04 ALMERA proficiency test samples was performed for statistical evaluation. The experiment used two spiked tap water samples and one naturally contaminated spruce needles sample from Austria collected shortly after the Chernobyl accident. Results showed that all three analyses passed both the accuracy and precision criteria, obtaining "Accepted" statuses. The two water samples gave measured results of 15.54 Bq/kg and 19.76 Bq/kg, which had relative biases of 5.68% and -3.63% against Maximum Acceptable Relative Bias (MARB) values of 15% and 20%, respectively. The spruce needles sample gave a measured result of 21.04 Bq/kg, which had a relative bias of 23.78% against a MARB of 30%. These results confirm our analytical performance for 90Sr determination in water and spruce needles samples using the developed method.
Keywords: ALMERA proficiency test, Cerenkov counting, determination of 90Sr, environmental samples
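The accuracy check behind those statements is a simple relative-bias comparison against the reported target value; the sketch below shows the arithmetic, with target activities back-calculated from the measured values and biases quoted above (so they are reconstructions for illustration, not the official IAEA target values).

```python
def relative_bias(measured: float, target: float) -> float:
    """Relative bias in percent of the target value."""
    return 100.0 * (measured - target) / target

# (measured Bq/kg, back-calculated target Bq/kg, MARB %) for the three samples.
samples = [("water 1", 15.54, 14.70, 15.0),
           ("water 2", 19.76, 20.50, 20.0),
           ("spruce needles", 21.04, 17.00, 30.0)]

for name, measured, target, marb in samples:
    bias = relative_bias(measured, target)
    status = "Accepted" if abs(bias) <= marb else "Not accepted"
    print(f"{name}: bias = {bias:+.2f}%  (MARB {marb:.0f}%)  -> {status}")
```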
Procedia PDF Downloads 232
9833 Test-Retest Agreement, Random Measurement Error and Practice Effect of the Continuous Performance Test-Identical Pairs for Patients with Schizophrenia
Authors: Kuan-Wei Chen, Chien-Wei Chen, Tai-Ling Chang, Nan-Cheng Chen, Ching-Lin Hsieh, Gong-Hong Lin
Abstract:
Background and Purposes: Deficits in sustained attention are common in patients with schizophrenia. Such impairment can limit patients' ability to effectively execute daily activities and affect the efficacy of rehabilitation. The aims of this study were to examine the test-retest agreement, random measurement error, and practice effect of the Continuous Performance Test-Identical Pairs (CPT-IP), a commonly used sustained attention test, in patients with schizophrenia. The results can provide empirical evidence for clinicians and researchers applying a sustained attention test with sound psychometric properties in schizophrenia patients. Methods: We recruited patients with chronic schizophrenia to be assessed twice, with a 1-week interval, using the CPT-IP. The intra-class correlation coefficient (ICC) was used to examine the test-retest agreement. The percentage of minimal detectable change (MDC%) was used to examine the random measurement error. Moreover, the standardized response mean (SRM) was used to examine the practice effect. Results: A total of 56 patients participated in this study. Our results showed that the ICC was 0.82, the MDC% was 47.4%, and the SRM was 0.36 for the CPT-IP. Conclusion: Our results indicate that the CPT-IP has acceptable test-retest agreement, substantial random measurement error, and a small practice effect in patients with schizophrenia. Therefore, to avoid overestimating patients' changes in sustained attention, we suggest that clinicians interpret the change scores of the CPT-IP conservatively in their routine repeated assessments.
Keywords: schizophrenia, sustained attention, CPT-IP, reliability
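The three indices reported above are computed from the two assessment sessions with standard formulas: SEM = SD·√(1 − ICC), MDC95 = 1.96·√2·SEM (expressed as a percentage of the mean for MDC%), and SRM = mean change / SD of change. The sketch below applies these formulas to simulated paired scores; the simulated data and the ICC(2,1) computation are illustrative assumptions, not the study's data or analysis code.

```python
import numpy as np

def icc_a1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    scores has shape (n_subjects, n_sessions)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k / n * (ms_cols - ms_err))

rng = np.random.default_rng(7)
true = rng.normal(50, 10, size=56)               # latent ability, 56 patients
test = true + rng.normal(0, 5, size=56)          # session 1
retest = true + 2.0 + rng.normal(0, 5, size=56)  # session 2, small practice gain
scores = np.column_stack([test, retest])

icc = icc_a1(scores)
sem = np.std(test, ddof=1) * np.sqrt(1 - icc)    # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem                  # minimal detectable change
mdc_pct = 100 * mdc95 / scores.mean()
change = retest - test
srm = change.mean() / change.std(ddof=1)         # standardized response mean

print(f"ICC = {icc:.2f}, MDC% = {mdc_pct:.1f}%, SRM = {srm:.2f}")
```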
Procedia PDF Downloads 304
9832 Analytical Investigation on Seismic Behavior of Infilled Reinforced Concrete Frames Strengthened with Precast Diagonal Concrete Panels
Authors: Ceyhun Aksoylu, Rifat Sezer
Abstract:
In this study, a strengthening method applicable without any evacuation process was investigated. In this analytical study, pushover analyses were carried out using the SAP2000 software. For this purpose, seven 1/3-scaled, 1-bay, 2-story R/C frames having usual deficiencies were produced; one of them was not strengthened but had a brick infill wall, while the others had infill walls strengthened with variously shaped high-strength precast diagonal concrete panels. The prepared analytical models were investigated under reversed-cyclic loading that resembles the seismic effect. As a result of the analytical study, the properties of the reinforced concrete frames, such as strength, rigidity, and energy dissipation capacity, were determined, and the strengthened models were compared with the unstrengthened one having the same properties. As a result of this study, the contributions of precast diagonal concrete panels applied to the infill walls of the existing frame systems against seismic effects were introduced with their advantages and disadvantages.
Keywords: RC frame, seismic effect, infill wall, strengthening, precast diagonal concrete panel, pushover analysis
Procedia PDF Downloads 348
9831 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. This study uses information on around 40,000 vehicles' specifications and operational environmental conditions, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms Linear Regression (LR), K-Nearest Neighbors (KNN) and Artificial Neural Networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing
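Nested cross-validation, as referenced above, separates hyperparameter tuning (inner loop) from performance estimation (outer loop) so that the reported error is not optimistically biased by the tuning. A minimal scikit-learn sketch of that procedure for one of the candidate models is shown below; the synthetic data, the KNN model choice and the parameter grid are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for vehicle specification / operating-condition features.
X, y = make_regression(n_samples=2000, n_features=10, noise=5.0, random_state=0)

# Inner loop: tune k for KNN.  Outer loop: estimate generalization error.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)

model = GridSearchCV(
    make_pipeline(StandardScaler(), KNeighborsRegressor()),
    param_grid={"kneighborsregressor__n_neighbors": [3, 5, 10, 20]},
    cv=inner_cv,
    scoring="neg_mean_absolute_error",
)
outer_scores = cross_val_score(model, X, y, cv=outer_cv,
                               scoring="neg_mean_absolute_error")
print(f"nested-CV MAE: {-outer_scores.mean():.2f} +/- {outer_scores.std():.2f}")
```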
Procedia PDF Downloads 178
9830 Assessment of Sidewalk Problems and Their Remedial Measures: Case Study of Dire Dawa Town Kebele 02 Sidewalks, Ethiopia
Authors: Abdurahman Anwar Shfa
Abstract:
A road sidewalk provides benefits, including safety, mobility, and healthier communities, by facilitating the movement of goods and people. It enables increased access to daily living activities and programs in the country. However, this access may be affected by many factors that pose a great challenge to individuals' daily activity, ranging from minor injury to death. Those problems include roads constructed without sidewalks, the use of sidewalks for selling purposes, potholes, and waste and trees on sidewalks. In this case, our objective is to identify problems related to sidewalks, assess the accessibility of sidewalks to all users, including pedestrians with disabilities, propose appropriate countermeasures for these problems, and prepare an indicator map. This study was undertaken to investigate the performance problems associated with sidewalks, particularly focusing on specified areas of Dire Dawa city kebele 02, to show the main problems and suggest that important consideration should be given to road sidewalks. To meet the objective of the research, data were collected, sidewalk construction practices and performance problems reported in the ERA manual were reviewed, and a field reconnaissance was carried out. This research encompassed a variety of activities regarding sidewalks, including problems and accidents that occurred due to these problems. The purpose of this research is to identify the types of risk to pedestrians walking along a roadway and the reasons for those risks. Based on our study, the sidewalks of Dire Dawa City kebele 02 are not adequate, and they are not accessible to all pedestrians, including those with disabilities.
Keywords: GIS, ERA, GPS, sidewalks way, asphalt road
Procedia PDF Downloads 32
9829 Reading Comprehension in Profound Deaf Readers
Authors: S. Raghibdoust, E. Kamari
Abstract:
Research shows that reduced functional hearing has a detrimental influence on the ability of an individual to establish proper phonological representations of words, since phonological representations are claimed to mediate the conceptual processing of written words. Word processing efficiency is expected to decrease with a decrease in functional hearing. In other words, it is predicted that hearing individuals would be more capable of word processing than individuals with hearing loss, as their functional hearing works normally. Studies also demonstrate that the quality of functional hearing affects reading comprehension via its effect on word processing skills. In other words, better hearing facilitates the development of phonological knowledge and can promote enhanced strategies for the recognition of written words, which in turn positively affect the higher-order processes underlying reading comprehension. The aims of this study were to investigate and compare the effect of deafness on the participants' abilities to process written words at the lexical and sentence levels through two online and one offline reading comprehension tests. The performance of a group of 8 deaf male students (ages 8-12) was compared with that of a control group of normal-hearing male students. All the participants had normal IQ and visual status, and came from an average socioeconomic background. None were diagnosed with a particular learning or motor disability. The language spoken in the homes of all participants was Persian. Two tests of word processing were developed and presented to the participants using the OpenSesame software, in order to measure the speed and accuracy of their performance at the perceptual and conceptual levels. In the third, offline test of reading comprehension, which comprised semantically plausible and semantically implausible subject relative clauses, the participants had to select the correct answer out of two choices. The data derived from the statistical analysis using SPSS software indicated that hearing and deaf participants had a similar word processing performance both in terms of speed and accuracy of their responses. The results also showed that there was no significant difference between the performance of the deaf and hearing participants in comprehending semantically plausible sentences (p > 0.05). However, a significant difference between the performances of the two groups was observed with respect to their comprehension of semantically implausible sentences (p < 0.05). In sum, the findings revealed that the seriously impoverished sentence reading ability characterizing the profoundly deaf subjects of the present research reflected their reliance on reading strategies based on insufficient or deviant structural knowledge, in particular in processing semantically implausible sentences, rather than a failure to efficiently process written words at the lexical level. This conclusion, of course, does not mean that deaf individuals may never experience deficits at the word processing level, deficits that impede their understanding of written texts. However, as stated in previous research, it seems reasonable to assume that the more familiar deaf individuals become with written words, the better they can recognize them, despite having a profound phonological weakness.
Keywords: deafness, reading comprehension, reading strategy, word processing, subject and object relative sentences
Procedia PDF Downloads 338
9828 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramón y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma, and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality as observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable, representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same thing through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
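A minimal NumPy illustration of the idea described above is sketched below: a single LIF neuron in which each incoming spike's post-synaptic impact is scaled by the recent activity of the other synapses through a pairwise relation matrix, so a forward input sequence produces a larger response than the reversed one. This is our own toy reconstruction of the mechanism, not the authors' Bindsnet implementation; all parameter values are assumptions.

```python
import numpy as np

def run_lif(spike_train, weights, relation, tau_m=20.0, tau_a=5.0, dt=1.0):
    """Leaky integrate-and-fire neuron with pairwise synaptic relations.

    spike_train: (T, n_syn) binary input spikes
    weights:     (n_syn,) synaptic weights
    relation:    (n_syn, n_syn) pairwise modulation strengths (zero diagonal)
    Returns the membrane potential trace (no spiking/reset, for illustration).
    """
    n_steps, n_syn = spike_train.shape
    v = 0.0
    trace = np.zeros(n_syn)            # recent activity of each synapse
    v_rec = np.zeros(n_steps)
    for t in range(n_steps):
        # Modulate each incoming spike by neighbouring synapses' recent activity.
        modulation = 1.0 + relation @ trace
        v += np.sum(spike_train[t] * weights * modulation)
        v -= v / tau_m * dt                       # membrane leak
        trace += spike_train[t] - trace / tau_a * dt
        v_rec[t] = v
    return v_rec

n_syn, n_steps = 5, 60
weights = np.ones(n_syn)
# Lower-triangular relations: a synapse is boosted when lower-index synapses
# (the "earlier" ones in the forward sequence) have fired recently.
relation = np.tril(np.full((n_syn, n_syn), 0.8), k=-1)

def sequence(order):
    s = np.zeros((n_steps, n_syn))
    for i, syn in enumerate(order):
        s[5 + 10 * i, syn] = 1.0       # one spike per synapse, 10 ms apart
    return s

forward = run_lif(sequence([0, 1, 2, 3, 4]), weights, relation)
backward = run_lif(sequence([4, 3, 2, 1, 0]), weights, relation)
print("peak response forward :", forward.max())
print("peak response backward:", backward.max())   # smaller: order matters
```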
Procedia PDF Downloads 133