Search results for: network capacity
7380 Wind Energy Status in Turkey
Authors: Mustafa Engin Başoğlu, Bekir Çakir
Abstract:
Since a large part of electricity generation relies on fossil-based resources, energy is an important agenda item for countries. Depletion of fossil resources, increasing awareness of climate change, and global warming concerns are the major reasons for turning to alternative energy resources. Solar, wind and hydropower are the main renewable energy sources. Among them, wind energy is promising for Turkey, whose installed wind power capacity increased approximately eightfold between 2008 and July 2014. The signing of the Kyoto Protocol can be regarded as a milestone for Turkey's energy policy. The Turkish government announced its 2023 Vision (2023 targets) in the 2010-2014 Strategic Plan prepared by the Ministry of Energy and Natural Resources (MENR). The 2023 energy targets can be summarized as follows: renewable energy sources will provide 30% of total electricity generation by 2023; installed wind energy capacity will reach 20 GW by 2023; other renewable energy sources such as solar, hydropower and geothermal will be encouraged with new incentive mechanisms; nuclear power plants will provide 10% of total electricity generation by 2023; and dependence on foreign energy will be reduced for sustainability and energy security. As of July 2014, the total installed capacity of wind power plants is 3.42 GW, and wind power plants with a further 1.16 GW of capacity are under construction. The Turkish government also encourages locally manufactured equipment. MILRES is an important project aimed at promoting the use of renewable sources in electricity generation. A 500 kW wind turbine will be produced in the first phase of the project; a 2.5 MW wind turbine will then be manufactured domestically within this project.
Keywords: wind energy, wind speed, 2023 vision, MILRES, wind energy potential in Turkey
Procedia PDF Downloads 543
7379 Would Intra-Individual Variability in Attention Be an Indicator of Senior Adults at Risk of Cognitive Decline? Evidence from the Attention Network Test (ANT)
Authors: Hanna Lu, Sandra S. M. Chan, Linda C. W. Lam
Abstract:
Objectives: Intra-individual variability (IIV) has been considered a biomarker of healthy ageing. However, the composite role of IIV in attention, as an early indicator of neurocognitive disorders, warrants further exploration. This study aims to investigate IIV, as well as its relationship with attention network functions, in adults with neurocognitive disorders (NCD). Methods: 36 adults with NCD due to Alzheimer's disease (NCD-AD), 31 adults with NCD due to vascular disease (NCD-vascular), and 137 healthy controls were recruited. Intra-individual standard deviations (iSD) and the intra-individual coefficient of variation of reaction time (ICV-RT) were used to evaluate IIV. Results: NCD groups showed greater IIV (iSD: F = 11.803, p < 0.001; ICV-RT: F = 9.07, p < 0.001). In ROC analyses, the indices of IIV could differentiate NCD-AD (iSD: AUC value = 0.687, p = 0.001; ICV-RT: AUC value = 0.677, p = 0.001) and NCD-vascular (iSD: AUC value = 0.631, p = 0.023; ICV-RT: AUC value = 0.615, p = 0.045) from healthy controls. Moreover, processing speed could distinguish NCD-AD from NCD-vascular (AUC value = 0.647, p = 0.040). Discussion: Intra-individual variability in attention provides a stable measure of cognitive performance and seems to help distinguish senior adults with different cognitive statuses.
Keywords: intra-individual variability, attention network, neurocognitive disorders, ageing
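The two variability indices and the ROC screening described in this abstract can be illustrated with a minimal sketch on synthetic reaction-time data (the group sizes follow the abstract; the RT distributions, trial counts, and library choices are assumptions for illustration only):

```python
# Illustrative sketch (not the authors' code): computing intra-individual
# variability indices from per-trial reaction times and screening their
# discriminative power with an ROC analysis, on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def iiv_indices(rt_ms):
    """Return (iSD, ICV-RT) for one participant's reaction times (ms)."""
    rt_ms = np.asarray(rt_ms, dtype=float)
    isd = rt_ms.std(ddof=1)            # intra-individual standard deviation
    icv = isd / rt_ms.mean()           # coefficient of variation of RT
    return isd, icv

# Synthetic cohort: 137 controls and 36 NCD-AD participants (group sizes
# taken from the abstract; the RT distributions themselves are assumed).
controls = [rng.normal(650, 60, size=96) for _ in range(137)]
ncd_ad   = [rng.normal(720, 110, size=96) for _ in range(36)]

icv = np.array([iiv_indices(rt)[1] for rt in controls + ncd_ad])
label = np.array([0] * len(controls) + [1] * len(ncd_ad))   # 1 = NCD-AD

# AUC of ICV-RT for separating NCD-AD from healthy controls
print("ICV-RT AUC:", round(roc_auc_score(label, icv), 3))
```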
Procedia PDF Downloads 474
7378 Reservoir-Triggered Seismicity from Water Level Variation in Lake Aswan
Authors: Abdel-Monem Sayed Mohamed
Abstract:
Lake Aswan is one of the largest man-made reservoirs in the world. The reservoir began to fill in 1964, and the level rose gradually, with annual irrigation cycles, until it reached a maximum water level of 181.5 m in November 1999, with a capacity of 160 km3. The filling of such a large reservoir changes the stress system, either by increasing vertical compressional stress through loading and/or by increasing pore pressure through the decrease of the effective normal stress. The resulting effect on fault zone stability depends strongly on the orientation of the pre-existing stress and the geometry of the reservoir/fault system. The main earthquake occurred on November 14, 1981, with magnitude 5.5. This event occurred 17 years after the reservoir began to fill, along the active part of the Kalabsha fault, and was located not far from the High Dam. Numerous small earthquakes have followed this earthquake and continue to the present. For this reason, 13 seismograph stations (a radio-telemetry network of short-period seismometers) were installed around the northern part of Lake Aswan. The main purpose of the network is to monitor the earthquake activity continuously within the Aswan region. The data described here are obtained from the continuous record of earthquake activity and lake-water level variation through the period from 1982 to 2015. The seismicity is concentrated in the Kalabsha area, where the easterly trending Kalabsha fault intersects the northerly trending faults. The earthquake foci are distributed in two seismic zones, shallow and deep in the crust. Shallow events have focal depths of less than 12 km, while deep events extend from 12 to 28 km. Correlation between the seismicity and the water level variation in the lake strongly suggests that the micro-earthquakes, particularly those in the shallow seismic zone, fall in the reservoir-triggered seismicity category. Water loading is one of several factors acting as an activating medium in triggering earthquakes. The common factors for all cases of induced seismicity seem to be the presence of specific geological conditions, the tectonic setting and water loading. The role of the water loading is as a supplementary source of earthquake events. Thus, the earthquake activity in the area originated tectonically (ML ≥ 4), and the water factor works as an activating medium in triggering small earthquakes (ML ≤ 3). Study of the seismicity induced by water level variation in Lake Aswan is of great importance and plays a great role in the safety of the High Dam body and its economic resources.
Keywords: Aswan lake, Aswan seismic network, seismicity, water level variation
Procedia PDF Downloads 369
7377 A Neurosymbolic Learning Method for Uplink LTE-A Channel Estimation
Authors: Lassaad Smirani
Abstract:
In this paper, we propose a Neurosymbolic Learning System (NLS) as a channel estimator for the Long Term Evolution Advanced (LTE-A) uplink. The proposed system, whose main idea is based on a neural network, has modules capable of performing bidirectional information transfer between a symbolic module and a connectionist module. We demonstrate various strengths of the NLS, especially the ability to integrate theoretical knowledge (rules) and experiential knowledge (examples), and to convert an initial knowledge base (rules) into a connectionist network; the ability to use empirical knowledge which, by learning, can revise the theoretical knowledge, acquire new knowledge and explain it; and finally the ability to improve the performance of symbolic or connectionist systems. Compared with conventional SC-FDMA channel estimation systems, the performance of the NLS in terms of complexity and quality is confirmed by theoretical analysis and simulation, which show that this system improves channel estimation accuracy and decreases the bit error rate.
Keywords: channel estimation, SC-FDMA, neural network, hybrid system, BER, LTE-A
Procedia PDF Downloads 393
7376 Estimation of Fouling in a Cross-Flow Heat Exchanger Using Artificial Neural Network Approach
Authors: Rania Jradi, Christophe Marvillet, Mohamed Razak Jeday
Abstract:
One of the most frequently encountered problems in industrial heat exchangers is fouling, which degrades the thermal and hydraulic performance of this type of equipment, leading to failure if undetected. It occurs due to the accumulation of undesired material on the heat transfer surface. It is therefore necessary to understand the heat exchanger fouling dynamics in order to plan mitigation strategies and ensure sustainable and safe operation. This paper proposes an Artificial Neural Network (ANN) approach to estimate the fouling resistance in a cross-flow heat exchanger using operating data collected from the phosphoric acid concentration loop. A set of 361 operating data points was used to validate the proposed model. The ANN attains AARD = 0.048%, MSE = 1.811x10⁻¹¹, RMSE = 4.256x10⁻⁶ and r² = 99.5% accuracy, which confirms that it is a credible and valuable approach for industrialists and technologists who are faced with the drawbacks of fouling in heat exchangers.
Keywords: cross-flow heat exchanger, fouling, estimation, phosphoric acid concentration loop, artificial neural network approach
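A minimal sketch of how such an ANN estimator and the reported error metrics (AARD, MSE, RMSE, r²) can be computed is shown below; the operating variables, their ranges, the synthetic fouling relation, and the network size are assumptions for illustration, not the study's data or model:

```python
# Minimal sketch on synthetic operating data: an MLP estimating fouling
# resistance and the error metrics reported in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
# Hypothetical inputs: inlet/outlet temperatures, flow rate, acid concentration.
X = rng.uniform([60, 40, 10, 20], [90, 70, 40, 50], size=(361, 4))
# Assumed fouling resistance (arbitrary units) generated from the inputs.
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.1, 361)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=1))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

aard = np.mean(np.abs((y_te - pred) / y_te)) * 100   # average absolute relative deviation, %
mse = mean_squared_error(y_te, pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_te, pred)
print(f"AARD={aard:.3f}%  MSE={mse:.3e}  RMSE={rmse:.3e}  r2={r2:.3f}")
```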
Procedia PDF Downloads 196
7375 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis
Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram
Abstract:
Alzheimer's disease is one of the most prevalent ailments, for which no effective cure or therapy has yet been established. A probable explosion in the number of patients in the upcoming years has created an enormous deal of interest in early detection of the disorder, which could conceivably lead to enhanced treatment outcomes. Complex changes in the brain are an observable signature of the disease, alongside unique genetic markers of the disease. Machine learning, together with deep learning and decision trees, reinforces the ability to learn characteristics from multi-dimensional data and thus simplifies the automatic classification of Alzheimer's disease. Extensive testing was designed and carried out to train and evaluate the prospect of Alzheimer's disease classification built on machine learning advances. It was observed that decision trees trained with a deep neural network produced excellent results, comparable to related pattern classification methods.
Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification
Procedia PDF Downloads 295
7374 Physicochemistry of Pozzolanic Stabilization of a Class A-2-7 Lateritic Soil
Authors: Ahmed O. Apampa, Yinusa A. Jimoh
Abstract:
The paper examines the mechanism of pozzolan-soil reactions, using a recent study on the chemical stabilization of a Class A-2-7 (3) lateritic soil with corn cob ash (CCA) as a case study. The objectives are to establish a nexus between the cation exchange capacity of the soil, the alkaline-forming compounds in CCA, and the percentage CCA addition to soil beyond which no further improvement in strength properties can be achieved; and to propose feasible chemical reactions to explain the chemical stabilization of the lateritic soil with CCA alone. The lateritic soil, as well as CCA of pozzolanic quality Class C, was separately analysed for its metallic oxide composition using the X-Ray Fluorescence technique. The cation exchange capacities (CEC) of the soil and the CCA were computed theoretically using the percentage composition of the base cations Ca2+, Mg2+, K+ and Na+ as 1.48 meq/100 g and 61.67 meq/100 g respectively, thus indicating a ratio of 0.024 or 2.4%. This figure, taken as the theoretical amount required to just fill up the exchangeable sites of the clay molecules, compares well with the laboratory observation of 1.5% for the optimum level of CCA addition to lateritic soil. The paper went on to present chemical reaction equations between the alkaline earth metals in the CCA and the silica in the lateritic soil to form silicates, thereby proposing an extension of the theory of the mechanism of soil stabilization to cover chemical stabilization with pozzolanic ash only. The paper concluded by recommending further research on the molecular structure of soils stabilized with pozzolanic waste ash alone, with a view to confirming the chemical equations advanced in the study.
Keywords: cation exchange capacity, corn cob ash, lateritic soil, soil stabilization
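The reported ratio follows directly from the two computed CEC values; restated as a worked step:

\[ \frac{\mathrm{CEC}_{\text{soil}}}{\mathrm{CEC}_{\text{CCA}}} = \frac{1.48\ \mathrm{meq}/100\,\mathrm{g}}{61.67\ \mathrm{meq}/100\,\mathrm{g}} \approx 0.024 = 2.4\% \]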
Procedia PDF Downloads 247
7373 Instant Fire Risk Assessment Using Artificial Neural Networks
Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan
Abstract:
Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fires are applied very effectively in order to prevent a fire from becoming dangerous in its initial stage. These indices provide the opportunity to prevent or intervene early by determining the stage of the fire, the potential for hazard, and the type of combustion agent from the percentage values of the ambient air components. In this study, an artificial neural network of the multi-layer perceptron (supervised learning) type is modeled with the determined input data and trained using the Levenberg-Marquardt algorithm, following modeling methods in the literature. The actual values produced by the indices are compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination is investigated.
Keywords: artificial neural networks, fire, Graham Index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett Index
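A hedged sketch of this kind of classifier is given below on synthetic index values; scikit-learn does not provide Levenberg-Marquardt training, so its default solver stands in here, and the input ranges and labelling rule are invented purely for illustration:

```python
# Hedged sketch (synthetic data, not the authors' model): a multi-layer
# perceptron mapping fire-index inputs to a hazard stage. scikit-learn does
# not implement Levenberg-Marquardt training, so the default Adam solver is
# used as a stand-in.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# Hypothetical inputs: Graham Index, Trickett Index, oxygen-decrease percentage.
X = rng.uniform([0.0, 0.0, 0.0], [3.0, 1.5, 10.0], size=(500, 3))
# Assumed labelling rule purely for illustration: larger indices -> higher stage.
stage = np.digitize(X @ np.array([1.0, 1.0, 0.3]), bins=[1.5, 3.0])  # stages 0, 1, 2

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=2))
clf.fit(X, stage)
print("training accuracy:", round(clf.score(X, stage), 3))
```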
Procedia PDF Downloads 135
7372 Research on the Spatial Organization and Collaborative Innovation of Innovation Corridors from the Perspective of Ecological Niche: A Case Study of Seven Municipal Districts in Jiangsu Province, China
Authors: Weikang Peng
Abstract:
The innovation corridor is an important spatial carrier to promote regional collaborative innovation, and its development process is the spatial re-organization process of regional innovation resources. This paper takes the Nanjing-Zhenjiang G312 Industrial Innovation Corridor, which involves seven municipal districts in Jiangsu Province, as empirical evidence. Based on multi-source spatial big data from 2010, 2016, and 2022, this paper applies triangulated irregular networks (TIN), head/tail breaks, a regional innovation ecosystem (RIE) niche fitness evaluation model, and social network analysis to carry out empirical research on the spatial organization and functional structural evolution characteristics of innovation corridors and their correlation with the structural evolution of the collaborative innovation network. The results show, first, that the development of innovation patches in the corridor has fractal characteristics in time and space and tends toward a multi-center, clustered layout along the Nanjing Bypass Highway and National Highway G312. Second, there are large differences in the spatial distribution pattern of niche fitness in the corridor across dimensions, and the niche fitness of innovation patches along the highway has increased significantly. Third, the scale of the collaborative innovation network in the corridor is expanding fast. The core of the network is shifting from the main urban area to the periphery of the city along the highway, with small-world and hierarchical characteristics, and the core-edge network structure is becoming prominent. With the development of the innovation corridor, the main collaborative mode in the corridor is changing from collaboration within innovation patches to collaboration between innovation patches, and innovation patches with high ecological suitability tend to be the active areas of collaborative innovation. Overall, a polycentric spatial layout, a graded functional structure, diversified innovation clusters, and differentiated environmental support play an important role in effectively constructing collaborative innovation linkages and in the stable expansion of the scale of collaborative innovation within the innovation corridor.
Keywords: innovation corridor development, spatial structure, niche fitness evaluation model, head/tail breaks, innovation network
Procedia PDF Downloads 18
7371 Router 1X3 - RTL Design and Verification
Authors: Nidhi Gopal
Abstract:
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device, its top-level architecture, and how the various sub-modules of the router, i.e. register, FIFO, FSM and synchronizer, are synthesized, simulated and finally connected to its top module.
Keywords: data packets, networking, router, routing
Procedia PDF Downloads 811
7370 Social Media, Networks and Related Technology: Business and Governance Perspectives
Authors: M. A. T. AlSudairi, T. G. K. Vasista
Abstract:
The concept of social media is at the top of the agenda for many business executives and public sector executives today. Decision makers, as well as consultants, try to identify ways in which firms and enterprises can make profitable use of social media and network-related applications such as Wikipedia, Facebook, YouTube, Google+ and Twitter. While it is fun and useful to participate in these media and networks to achieve communication effectively and efficiently, semantic and sentiment analysis and interpretation become a crucial issue. So, the objective of this paper is to provide a literature review on social media, networks and related technology concerning semantics and sentiment or opinion analysis, covering business and governance perspectives. In this regard, a case study on the use and adoption of social media in Saudi Arabia is discussed. It is concluded that semantic web technology plays a significant role in analyzing social networks and social media content to extract interpretational knowledge for strategic decision support.
Keywords: CRASP methodology, formative assessment, literature review, semantic web services, social media, social networks
Procedia PDF Downloads 450
7369 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls
Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin
Abstract:
The objective of this paper is to investigate the effects of thickness and axial loading during a fire test on the load-bearing capacity of a fire-damaged normal-strength concrete wall. These two factors influence the temperature distributions in the concrete members, which are mainly obtained through numerous experiments. Toward this goal, three wall specimens of different thicknesses are heated for 2 h according to the ISO-standard heating curve, and the temperature distributions through the thicknesses are measured using thermocouples. In addition, two wall specimens are heated for 2 h while simultaneously being subjected to a constant axial loading at their top sections. The test results show that the temperature distribution during the fire test depends on the wall thickness and the axial load applied during the fire test. After the fire tests, the specimens are cured for one month, followed by loading tests. The heated specimens are compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls show a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference became evident with respect to the wall thickness. To validate the experimental results, finite element models are generated using the temperature-dependent material properties obtained from the experiments, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, is then applied to model fire-damaged walls 2,800 mm high, a typical story height of residential buildings in Korea, considering the buckling effect. The models for structural analysis are generated from the deformed shape obtained after thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not significantly depend on the wall thickness, owing to the restraint of the pinned ends. The difference in the load-bearing capacity of fire-damaged walls with respect to the axial load applied during the fire is within approximately 5%.
Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis
Procedia PDF Downloads 214
7368 Selecting a Foreign Country to Build a Naval Base Using a Fuzzy Hybrid Decision Support System
Authors: Latif Yanar, Muammer Kaçan
Abstract:
Decision support systems are becoming more important in many fields of science and technology and are used effectively, especially when the problems to be solved are complicated and involve many criteria. In such problems, one of the main challenges for decision makers is that sometimes they cannot produce quantifiable data for evaluating the criteria, only the knowledge and judgment of experts. In recent years, fuzzy set theory and fuzzy logic based decision models have been gaining more attention in the literature. In this study, a decision support model to determine a country in which to build a naval base is proposed, and the model is applied to the Turkish Navy using the evaluations of Turkish Navy officers and academics from the international relations departments of various universities located in Istanbul. The results obtained from the evaluations made by the experts in our model are calculated by a decision support tool named DESTEC 1.0, which was developed by the authors using the C# programming language. The tool gives advice to the decision maker using the Analytic Hierarchy Process, Analytic Network Process, Fuzzy Analytic Hierarchy Process and Fuzzy Analytic Network Process all at once. The calculated results for five foreign countries are shown in the conclusion.
Keywords: decision support system, analytic hierarchy process, fuzzy analytic hierarchy process, analytic network process, fuzzy analytic network process, naval base, country selection, international relations
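As a hedged illustration of the crisp AHP step underlying such a tool (the criteria, pairwise judgments, and the fuzzy/ANP extensions of DESTEC 1.0 are not reproduced; the numbers below are invented), a priority vector and consistency ratio can be computed from a pairwise comparison matrix as follows:

```python
# Illustrative AHP step only (invented judgments): priority vector from the
# principal eigenvector of a pairwise comparison matrix, plus a consistency ratio.
import numpy as np

criteria = ["strategic location", "political stability", "logistics cost"]
# Saaty-style pairwise comparison judgments (hypothetical).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # Saaty random index for n = 3 is 0.58
for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
print(f"consistency ratio: {cr:.3f}")
```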
Procedia PDF Downloads 590
7367 Analysis of Non-Conventional Roundabout Performance in Mixed Traffic Conditions
Authors: Guneet Saini, Shahrukh, Sunil Sharma
Abstract:
Traffic congestion is the most critical issue faced by those in the transportation profession today. Over the past few years, roundabouts have been recognized as a measure to promote efficiency at intersections globally. In developing countries like India, this type of intersection still faces many issues, such as bottleneck situations, long queues and increased waiting times, due to increasing traffic, which in turn affects the performance of the entire urban network. This research is a case study of a roundabout that is non-conventional in terms of geometric design, in a small town in India. These types of roundabouts should be analyzed for their functionality in the mixed traffic conditions prevalent in many developing countries. Microscopic traffic simulation is an effective tool to analyze traffic conditions and estimate various measures of the operational performance of intersections, such as capacity, vehicle delay, queue length and Level of Service (LOS) of an urban roadway network. This study involves the analysis of an unsymmetrical, non-circular, 6-legged roundabout known as "Kala Aam Chauraha" in the small town of Bulandshahr in Uttar Pradesh, India, using the VISSIM simulation package, which is the most widely used software for microscopic traffic simulation. For coding in VISSIM, data are collected from the site during the morning and evening peak hours of a weekday and then analyzed for base model building. The model is calibrated on driving behavior and vehicle parameters, an optimal set of calibrated parameters is obtained, and the model is then validated to obtain a base model that can replicate real field conditions. This calibrated and validated model is then used to analyze the prevailing operational traffic performance of the roundabout, which is compared with a proposed alternative to improve the efficiency of the roundabout network and to accommodate pedestrians in the geometry. The study results show that the proposed alternative is an improvement over the present roundabout, as it considerably reduces congestion, vehicle delay and queue length and hence successfully improves roundabout performance without compromising pedestrian safety. The study proposes similar designs for the modification of existing non-conventional roundabouts experiencing excessive delays and queues in order to improve their efficiency, especially in the case of developing countries. From this study, it can be concluded that there is a need to improve the current geometry of such roundabouts to ensure better traffic performance and the safety of drivers and pedestrians negotiating the intersection, and hence this proposal may be considered a best fit.
Keywords: operational performance, roundabout, simulation, VISSIM
Procedia PDF Downloads 138
7366 INNPT Nano Particles Material Technology as Enhancement Technology for Biological WWTP Performance and Capacity
Authors: Medhat Gad
Abstract:
Wastewater treatment has become a major issue in this decade due to the shortage of water resources, population growth and modern living requirements. Reuse of treated wastewater in the industrial and agricultural sectors is in high demand to offset the shortage of clean water supply as well as to protect the ecosystem from the dangerous pollutants in insufficiently treated wastewater. In recent decades, most wastewater treatment plants have been built using primary or secondary biological treatment technology, which generally does not provide sufficient treatment or removal of phosphorus and nitrogen. Plants built ten to fifteen years ago are also now suffering from overflow, which decreases the treatment efficiency of the plant. Discharging treated wastewater that contains phosphorus and nitrogen into water reservoirs and irrigation canals destroys the ecosystem and aquatic life. Chemicals can be used to enhance treatment efficiency for domestic wastewater, but this leads to a huge amount of sludge, which costs a lot of money. To enhance wastewater treatment, we used the INNPT nanomaterial, which consists of calcium, aluminum and iron oxides and compounds plus silica, sodium and magnesium. The INNPT nanomaterial was used at a dose of 100 mg/l to upgrade an SBR treatment plant in Cairo, Egypt - which has three treatment tanks, each with a capacity of 2,500 cubic meters per day - to the tertiary treatment level by removing phosphorus and nitrogen and increasing dissolved oxygen in the final effluent. The results showed that the treatment retention time decreased from 9 hours in the SBR system to one hour using the INNPT nanomaterial, with an improvement in effluent quality, while increasing plant capacity to 20 thousand cubic meters per day. Nitrogen removal efficiency reached 77%, phosphorus removal efficiency reached 90% and COD removal efficiency was 93%, all of which comply with tertiary treatment limits according to Egyptian law.
Keywords: INNPT technology, nanomaterial, tertiary wastewater treatment, capacity extending
Procedia PDF Downloads 162
7365 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry
Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc
Abstract:
Business Process Outsourcing has been one of the fastest growing and emerging industries in the Philippines today. Unlike most contact service centers, more popularly known as "call centers", the BPO industry's primary outsourced service here is performing audits of global clients' logistics. As a service industry, manpower is considered the most important yet the most expensive resource in the company. Because of this, there is a need to maximize human resources so that people are effectively and efficiently utilized. The main purpose of the study is to optimize the current manpower resources through effective distribution and assignment of the different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix gathered through a time study, which incorporates the learning curve concept. Subsequently, a simulation model was made to duplicate the arrival rate of demand, which includes the different batches and types of bills per day. Next, a mathematical linear programming model was formulated. Its objective is to minimize the direct labor cost per bill by allocating the different types of bills to the different skill levels of operators. Finally, a hypothesis test was done to validate the model, comparing the actual and simulated results. The analysis of results revealed that there is low utilization of effective capacity because of the failure to determine the product mix, skill mix and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to overestimation of labor needs. From the current 107 operators, the proposed model gives a result of 79 operators. This results in an increase in the utilization of effective capacity to 14.94%. It is recommended that the excess 28 operators be reallocated to other areas of the department. Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when the current capacity reaches its limit with the expected increasing demand.
Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning
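A minimal sketch of this kind of linear program is shown below; the bill types, processing times, wage rates, capacities, and demand are invented for illustration and do not come from the study:

```python
# Hedged sketch of the assignment LP described above, with invented numbers:
# minimise direct labour cost of processing the daily bill demand by deciding
# how many bills of each type go to each operator skill level.
import numpy as np
from scipy.optimize import linprog

bill_types = ["simple", "standard", "complex"]
skills = ["junior", "senior"]
demand = np.array([400, 300, 100])              # bills per day, per type (assumed)
time_per_bill = np.array([[4, 6, 12],           # minutes per bill, junior (assumed)
                          [3, 4, 8]])           # minutes per bill, senior (assumed)
wage_per_min = np.array([0.08, 0.12])           # labour cost per minute (assumed)
capacity_min = np.array([10 * 480, 6 * 480])    # 10 juniors, 6 seniors, 480 min each

# Decision variables x[s, b] = bills of type b handled by skill level s.
c = (wage_per_min[:, None] * time_per_bill).ravel()        # cost coefficients

# Demand constraints: each bill type fully processed.
A_eq = np.zeros((len(bill_types), c.size))
for b in range(len(bill_types)):
    for s in range(len(skills)):
        A_eq[b, s * len(bill_types) + b] = 1
b_eq = demand

# Capacity constraints: minutes used per skill level <= available minutes.
A_ub = np.zeros((len(skills), c.size))
for s in range(len(skills)):
    A_ub[s, s * len(bill_types):(s + 1) * len(bill_types)] = time_per_bill[s]
b_ub = capacity_min

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("minimum daily labour cost:", round(res.fun, 2))
print("assignment (bills per skill level and type):")
print(res.x.reshape(len(skills), len(bill_types)).round(1))
```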
Procedia PDF Downloads 518
7364 Adsorption of Phenol and 4-Hydroxybenzoic Acid onto Functional Materials
Authors: Mourad Makhlouf, Omar Bouchher, Messabih Sidi Mohamed, Benrachedi Khaled
Abstract:
The objective of this study was to investigate the removal of two organic pollutants, 4-hydroxybenzoic acid (p-hydroxybenzoic acid) and phenol, from synthetic wastewater by adsorption on mesoporous materials. In this context, the aim of this work is to study the adsorption of the organic compounds phenol and 4AHB on MCM-41 and FSM-16, both non-grafted (NG) and grafted (G) with trimethylchlorosilane (TMCS). The results of phenol and 4AHB adsorption in aqueous solution show that the adsorption capacity tends to increase after grafting, in relation to the increase in hydrophobicity. The grafted materials are distinguished by a higher adsorption capacity than the NG materials. The difference is 14.43% (MCM-41) and 14.55% (FSM-16) for phenol, and 16.72% (MCM-41) and 13.57% (FSM-16) for 4AHB. Our adsorption results show that the TMCS-grafted materials are good adsorbents at 25 °C.
Keywords: MCM-41, FSM-16, TMCS, phenol, 4AHB
Procedia PDF Downloads 272
7363 Tabu Search to Draw Evacuation Plans in Emergency Situations
Authors: S. Nasri, H. Bouziri
Abstract:
Disasters are frequently experienced in our days. They are caused by floods, landslides, and building fires, the last of which is the main focus of this study. To cope with these unexpected events, precautions must be taken to protect human lives. The emphasis of this work is on the resolution of the evacuation problem in the case of a no-notice disaster. The evacuation problem is formulated as a dynamic network flow problem. In particular, we model the evacuation problem as an earliest arrival flow problem with load-dependent transit times. This problem is classified as NP-hard. Our challenge here is to propose a metaheuristic solution for solving the evacuation problem. We define our objective as the maximization of the number of evacuees during the earliest periods of a time horizon T. The objective provides for the evacuation of persons as soon as possible. We performed an experimental study on emergency evacuation from the Tunisian children's hospital. This work prompts us to look for evacuation plans corresponding to several situations where the network dynamically changes.
Keywords: dynamic network flow, load dependent transit time, evacuation strategy, earliest arrival flow problem, tabu search metaheuristic
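A minimal tabu search sketch on a toy stand-in for this problem is given below; the group sizes, capacities, penalty weight, and neighbourhood are invented, and the dynamic-network, load-dependent transit-time model of the paper is not reproduced:

```python
# Minimal tabu search sketch on a toy evacuation stand-in (invented data):
# assign evacuee groups to departure periods of limited capacity, maximising
# the number of people moved in the earliest periods.
import random

random.seed(3)
group_size = [random.randint(5, 20) for _ in range(12)]   # evacuee groups
periods = 6                                                # time horizon
capacity = 40                                              # people per period
weight = [periods - t for t in range(periods)]             # earlier = better

def evaluate(assign):
    """Weighted evacuees, with a penalty when a period exceeds capacity."""
    load = [0] * periods
    for g, t in enumerate(assign):
        load[t] += group_size[g]
    score = sum(weight[t] * min(load[t], capacity) for t in range(periods))
    penalty = sum(max(load[t] - capacity, 0) for t in range(periods))
    return score - 10 * penalty

assign = [random.randrange(periods) for _ in group_size]   # initial plan
best, best_score = assign[:], evaluate(assign)
tabu, tabu_len = [], 15

for _ in range(300):
    candidates = []
    for g in range(len(group_size)):
        for t in range(periods):
            if t != assign[g] and (g, t) not in tabu:
                move = assign[:]
                move[g] = t
                candidates.append((evaluate(move), g, t, move))
    score, g, t, move = max(candidates)        # best non-tabu neighbour
    tabu.append((g, assign[g]))                # forbid moving g straight back
    tabu = tabu[-tabu_len:]
    assign = move
    if score > best_score:
        best, best_score = move[:], score

print("best weighted-evacuation score:", best_score)
print("departure period per group:", best)
```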
Procedia PDF Downloads 371
7362 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding the influence of information flow and knowledge sharing within the network on patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office's artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
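The two analysis steps named here (network centrality computation and a negative binomial regression of citation counts) can be sketched on a small synthetic network as follows; the graph, the citation-generating rule, and the library choices are illustrative assumptions, not the study's 59,379-patent data:

```python
# Sketch of the two analysis steps on synthetic data: centrality metrics of a
# stand-in patent coupling network with networkx, then a negative binomial
# regression of citation counts on centrality.
import networkx as nx
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
G = nx.erdos_renyi_graph(n=300, p=0.03, seed=4)     # stand-in coupling network

deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)

x = np.array([btw[v] for v in G.nodes()])
# Synthetic citation counts with an inverted-U dependence on betweenness.
mu = np.exp(1.0 + 40 * x - 600 * x ** 2)
citations = rng.poisson(mu)

X = sm.add_constant(np.column_stack([x, x ** 2]))
model = sm.GLM(citations, X, family=sm.families.NegativeBinomial())
result = model.fit()
print(result.params)        # a negative squared term indicates the inverted U
```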
Procedia PDF Downloads 52
7361 Detection and Classification of Rubber Tree Leaf Diseases Using Machine Learning
Authors: Kavyadevi N., Kaviya G., Gowsalya P., Janani M., Mohanraj S.
Abstract:
Hevea brasiliensis, also known as the rubber tree, is one of the foremost crop assets in the world. One of the most significant advantages of the rubber plant in terms of air oxygenation is its capacity to reduce the likelihood of an individual developing respiratory allergies like asthma. To construct a system that can properly identify crop diseases and pests and then build a database of insecticides for each pest and disease, the illness must first be detected so that treatment can be given. This article primarily examines three major leaf diseases that are economically damaging: bird's eye spot, algal spot and powdery mildew. The proposed work focuses on disease identification on rubber tree leaves, accomplished by employing a high-performing algorithm. The processing pipeline consists of input, preprocessing, image segmentation, feature extraction and classification, replacing the time-consuming procedures otherwise used to detect the sickness. As a consequence, the main ailments, underlying causes, and signs and symptoms of diseases that harm the rubber tree are covered in this study.
Keywords: image processing, python, convolution neural network (CNN), machine learning
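A compact sketch of the kind of CNN classifier implied by the keywords is shown below; the network layout, image size, folder structure, and hyperparameters are assumptions for illustration and are not the authors' model:

```python
# Compact sketch (assumed folder layout and hyperparameters, not the authors'
# network): a small CNN for three-class rubber-leaf disease classification.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

classes = ["birds_eye_spot", "algal_spot", "powdery_mildew"]

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 64), nn.ReLU(),
    nn.Linear(64, len(classes)),
)

transform = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
# Assumes leaf images sorted into data/leaves/<class_name>/ folders.
train_set = datasets.ImageFolder("data/leaves", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.3f}")
```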
Procedia PDF Downloads 76
7360 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks
Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah
Abstract:
Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of the control messages needed to discover the route. In this paper, we utilize the positions of the network nodes to group the nodes into connected clusters. We use cluster heads only for forwarding the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.
Keywords: ad-hoc network, MANET, ant colony routing, position based routing
Procedia PDF Downloads 424
7359 Omni-Relay (OR) Scheme-Aided LTE-A Communication Systems
Authors: Hassan Mahasneh, Abu Sesay
Abstract:
We propose the use of relay terminals at the cell edge of an LTE-based cellular system. Each relay terminal is equipped with an omni-directional antenna. We refer to this scheme as the Omni-Relay (OR) scheme. The OR scheme coordinates the inter-cell interference (ICI) stemming from adjacent cells and increases the desired signal level at cell-edge regions. To validate the performance of the OR scheme, we derive the average signal-to-interference plus noise ratio (SINR) and the average capacity and compare them with those of the conventional universal frequency reuse factor (UFRF). The results show that the proposed OR scheme provides higher average SINR and average capacity than the UFRF, owing to the assistance of the distributed relay nodes.
Keywords: the UFRF scheme, the OR scheme, ICI, relay terminals, SINR, spectral efficiency
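In generic form (the paper's averaging over fading and relay placement is not reproduced here), the two quantities being compared are the per-user SINR and the resulting Shannon spectral efficiency:

\[ \mathrm{SINR} = \frac{P_{d}}{\sum_{i \in \mathcal{I}} P_{i} + N_{0}}, \qquad C = \log_{2}\left(1 + \mathrm{SINR}\right)\ \text{bits/s/Hz}, \]

where \(P_{d}\) is the desired signal power (direct plus relayed components under the OR scheme), \(P_{i}\) is the ICI power received from interfering cell \(i\), and \(N_{0}\) is the noise power.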
Procedia PDF Downloads 339
7358 A Brief Review of the Axial Capacity of Circular High Strength CFST Columns
Authors: Fuat Korkut, Soner Guler
Abstract:
Concrete filled steel tube (CFST) columns are commonly used in construction applications such as high-rise buildings and bridges owing to their many remarkable benefits. The use of concrete filled steel tube columns provides larger usable areas through the reduction in the cross-sectional area of the columns. The main aim of this study is to examine the axial load capacities of circular high strength concrete filled steel tube columns according to Eurocode 4 (EC4) and the Chinese Code (DL/T). The results showed that the predictions of EC4 and the Chinese Code DL/T are unsafe for all specimens.
Keywords: concrete-filled steel tube column, axial load capacity, Chinese code, Australian Standard
Procedia PDF Downloads 504
7357 Three-Stage Least Squares Models of Station-Level Subway Ridership: Incorporating an Analysis of Integrated Transit Network Topology Measures
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to the improvement of people's quality of life. To take advantage of these benefits, the city of Seoul has constructed an integrated transit system including both subways and buses. This effort has led to approximately 6.9 million citizens using the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the critical objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating subway ridership at the station level, many previous studies relied on Ordinary Least Squares regression, but there is a lack of studies considering the endogeneity issues that might appear in the subway ridership prediction model. This study focused on both discovering the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership. It could ultimately contribute to developing more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, with the temporal scope set to twenty-four hours with one-hour interval time panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures, which have characteristics regarding connectivity, centrality, transitivity, and reciprocity, were estimated based on complex network theory. The results of the integrated transit network topology analysis were compared to the subway-only network topology. Also, a non-recursive approach, Three-Stage Least Squares, was applied to develop the daily subway ridership model while capturing the endogeneity between bus and subway demands. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that network topology measures had significant effects. In particular, the elasticity was 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, it was shown that bus demand and subway ridership are endogenous in a non-recursive manner, as predicted bus ridership and predicted subway ridership are statistically significant in the OLS regression models. Therefore, the three-stage least squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
Keywords: integrated transit system, network topology measures, three-stage least squares, endogeneity, subway ridership
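The endogeneity treatment described here can be illustrated with a simplified two-stage (instrumental-variable) step on synthetic data; this is a stand-in for the full three-stage least squares system, with invented variables and coefficients, not the paper's estimation:

```python
# Simplified two-stage illustration of the endogeneity treatment (synthetic
# data): bus ridership is first predicted from an instrument, then the
# prediction replaces the raw value in the subway ridership equation.
import numpy as np

rng = np.random.default_rng(5)
n = 500
closeness = rng.normal(0, 1, n)           # network topology measure (exogenous)
bus_stops = rng.normal(0, 1, n)           # instrument for bus demand
shock = rng.normal(0, 1, n)               # common unobserved shock

bus = 0.8 * bus_stops + 0.5 * shock + rng.normal(0, 1, n)                 # endogenous
subway = 1.0 + 0.6 * bus + 0.4 * closeness + 0.8 * shock + rng.normal(0, 1, n)

def ols(y, X):
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress the endogenous regressor on the instrument and exogenous vars.
b1 = ols(bus, [bus_stops, closeness])
bus_hat = b1[0] + b1[1] * bus_stops + b1[2] * closeness

# Stage 2: use the prediction in the subway ridership equation.
print("naive OLS coefficient on bus:", round(ols(subway, [bus, closeness])[1], 3))
print("two-stage coefficient on bus:", round(ols(subway, [bus_hat, closeness])[1], 3))
```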
Procedia PDF Downloads 177
7356 Adsorbent Removal of Oil Spills Using Bentonite Clay
Authors: Saad Mohamed Elsaid Abdelrahman
Abstract:
The adsorption method is one of the best modern techniques for removing pollutants, especially organic hydrocarbon compounds, from polluted water. In this research, bentonite clay is used to remove organic hydrocarbon compounds, such as heptane and octane, resulting from oil spills in seawater. The bentonite clay can be obtained from the Kholayaz area, located 80 km north of Jeddah. Chemical analysis shows that the bentonite clay consists of a mixture of silica, alumina and oxides of some elements. The bentonite clay can be activated, in order to raise its adsorption efficiency and to make it suitable for removing pollutants, using an ionic organic solvent. It is necessary to study some of the factors that could affect the efficiency of the bentonite clay in removing oily organic compounds, such as the time of contact of the clay with the heptane and octane solutions, pH and temperature, in order to reach the highest adsorption capacity of the bentonite clay. The temperature can be a few degrees Celsius above ambient; the adsorption capacity of the clay decreases when the temperature is raised by more than 4°C, reaching its lowest value at a temperature of 50°C. The results show that a contact time of 30 minutes and a pH of 6.8 are the best conditions to obtain the highest adsorption capacity of the clay, 467 mg in the case of heptane and 385 mg in the case of the octane compound. The experiments conducted on the bentonite clay were encouraging for selecting it to remove heavy molecular weight pollutants such as the petroleum compounds under study.
Keywords: adsorbent, bentonite clay, oil spills, removal
Procedia PDF Downloads 86
7355 Lapped Gusset Joints in Compression
Authors: K. R. Tshunza, A. Elvin, A. Gabremmeskel
Abstract:
Final results of an extensive laboratory research program on "lapped gusset joints in compression" are presented. The investigation was carried out at the Heavy Structures Laboratory at the University of the Witwatersrand in Johannesburg, South Africa. A proposed, relatively easy-to-use analytical equation was found to be reasonably adequate in determining the global compressive capacity of lapped gusset joints under compressive load. A wide range of lapped mild steel plates of varying slenderness, welded onto 219x10 and 127x6 mild steel circular hollow sections of 1 m length, were tested in compression, and the formula was validated with the experimental results. The investigation shows that the connection's capacity is controlled by flexure due to the eccentricity between the plates that are connected side to side.
Keywords: compression, eccentricity, lapped gusset joints, moment resistance
Procedia PDF Downloads 307
7354 Predicting Indonesia External Debt Crisis: An Artificial Neural Network Approach
Authors: Riznaldi Akbar
Abstract:
In this study, we compared the in-sample and out-of-sample performance of an Artificial Neural Network (ANN) model with a back-propagation algorithm in correctly predicting external debt crises in Indonesia. We found that the exchange rate, foreign reserves, and exports are the major determinants of experiencing an external debt crisis. The in-sample performance of the ANN provides relatively superior results. The ANN model is able to correctly classify 89.12 per cent of crises with reasonably low false alarms of 7.01 per cent. Out of sample, the prediction performance deteriorates somewhat compared to the in-sample performance. This can be explained by the ANN model's tendency to over-fit the data in-sample, so that it cannot fit the out-of-sample data very well. Ten-fold cross-validation has been used to improve the out-of-sample prediction accuracy. The results also offer policy implications. The out-of-sample performance can be very sensitive to the size of the samples, as it can yield a higher total misclassification error and lower prediction accuracy. The ANN model can be used to identify past crisis episodes with some accuracy, but predicting crises outside the estimation sample is much more challenging because of the presence of uncertainty.
Keywords: debt crisis, external debt, artificial neural network, ANN
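A sketch of the evaluation structure (a back-propagation MLP on the three determinants named above, scored with 10-fold cross-validation) is given below; the data and the crisis-labelling rule are synthetic and purely illustrative:

```python
# Sketch of the evaluation structure only (synthetic data, not Indonesia's
# series): an MLP on exchange rate, reserves and exports, scored with
# 10-fold cross-validation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(6)
n = 400
exchange_rate = rng.normal(0, 1, n)
reserves = rng.normal(0, 1, n)
exports = rng.normal(0, 1, n)
X = np.column_stack([exchange_rate, reserves, exports])
# Assumed crisis rule for illustration: depreciation with low reserves/exports.
crisis = (exchange_rate - 0.7 * reserves - 0.5 * exports
          + rng.normal(0, 0.5, n) > 1.2).astype(int)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=6))
scores = cross_val_score(clf, X, crisis,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=6))
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```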
Procedia PDF Downloads 437
7353 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform
Abstract:
Image recognition, as one of the most critical technologies in computer vision, helps machines such as robots understand a scene and, if deployed appropriately, will trigger a revolution in remote sensing and industrial automation. With the development of AI technologies, many prevailing and sophisticated neural networks have been developed for image recognition. However, the hardware computer vision platforms that support neural networks for image recognition, which are as crucial as the neural network technologies themselves, need to be addressed more thoroughly as research subjects, since different computer vision platforms are decisive in leveraging the performance of different neural networks for recognition. In this paper, three different computer vision platforms - Jetson Nano (with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and Google Colab (web-based, using a GPU) - are explored, and four prominent neural network architectures (AlexNet, VGG (16/19), GoogleNet, and ResNet (18/34/50)) are investigated. For each pairing of computer vision platform and neural network, performance is evaluated on the merits of recognition accuracy and time efficiency. In the case study using public ImageNet datasets, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints.
Keywords: AlexNet, VGG, GoogleNet, ResNet, Jetson Nano, CUDA, COCO-NET, CIFAR-10, ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Google Colab
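A sketch of how such a pairwise platform-versus-architecture timing comparison can be scripted is shown below; it assumes torchvision 0.13 or newer for the weights argument, uses a dummy batch instead of the evaluation datasets, and is not the benchmark used in the paper:

```python
# Sketch of a platform benchmark (assumes torchvision >= 0.13 for the
# `weights` argument; timings will differ on Jetson Nano, a CUDA laptop,
# and Colab): load each architecture with pretrained weights and time
# inference on a dummy batch.
import time
import torch
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
batch = torch.randn(8, 3, 224, 224).to(device)        # stand-in images

architectures = {
    "AlexNet": models.alexnet,
    "VGG16": models.vgg16,
    "GoogleNet": models.googlenet,
    "ResNet18": models.resnet18,
}

for name, builder in architectures.items():
    model = builder(weights="DEFAULT").to(device).eval()
    with torch.no_grad():
        model(batch)                                   # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        model(batch)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed * 1000:.1f} ms per batch of 8 on {device}")
```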
Procedia PDF Downloads 89
7352 DNA Multiplier: A Design Architecture of a Multiplier Circuit Using DNA Molecules
Authors: Hafiz Md. Hasan Babu, Khandaker Mohammad Mohi Uddin, Nitish Biswas, Sarreha Tasmin Rikta, Nuzmul Hossain Nahid
Abstract:
Nanomedicine and bioengineering use biological systems that can perform computing operations. In a biocomputational circuit, different types of biomolecules and DNA (Deoxyribonucleic Acid) are used as active components. DNA computing has the capability of performing parallel processing and a large storage capacity, which distinguishes it from other computing systems. In most processors, the multiplier is treated as a core hardware block, and multiplication is one of the most time-consuming and lengthy tasks. In this paper, cost-effective DNA multipliers are designed using algorithms of molecular DNA operations and compared with conventional ones. The speed and storage capacity of a DNA multiplier are also much higher than those of a traditional silicon-based multiplier.
Keywords: biological systems, DNA multiplier, large storage, parallel processing
Procedia PDF Downloads 212
7351 Adsorption of Phosphate from Aqueous Solution Using Filter Cake for Urban Wastewater Treatment
Authors: Girmaye Abebe, Brook Lemma
Abstract:
Adsorption of phosphorus (P as PO43-) onto filter cake was studied to assess the media's capability in removing phosphorus from wastewaters. The filter cake, generated as a waste residue from the alum manufacturing process, has a high amount of silicate according to the complete silicate analysis performed in the experiment. A series of batch adsorption experiments was carried out to evaluate the parameters that influence the adsorption capacity of PO43-. The factors studied include the effect of contact time, adsorbent dose, thermal pretreatment of the adsorbent, neutralization of the adsorbent, initial PO43- concentration, pH of the solution and co-existing anions. Results showed that adsorption of PO43- is fairly rapid in the first 5 min, after which it increases slowly to reach equilibrium in about 1 h. The treatment efficiency of PO43- increased with adsorbent dose. A removal efficiency of about 90% was reached within 1 h at an optimum adsorbent dose of 10 g/L for an initial PO43- concentration of 10 mg/L. The amount of PO43- adsorbed increased with increasing initial PO43- concentration. Heat treatment and surface neutralization of the adsorbent did not improve the PO43- removal capacity and efficiency. The percentage of PO43- removal remains nearly constant within the pH range of 3-8. The adsorption data at ambient pH were well fitted by the Langmuir isotherm and the Dubinin-Radushkevich (D-R) isotherm model, with capacities of 25.84 and 157.55 mg/g of the adsorbent, respectively. The adsorption kinetics were found to follow a pseudo-second-order rate equation with an average rate constant of 3.76 g.min−1.mg−1. The presence of bicarbonate or carbonate at higher concentrations (10-1000 mg/L) decreased the PO43- removal efficiency slightly, while other anions (Cl-, SO42-, and NO3-) had no significant effect within the concentration range tested. The overall result shows that the filter cake is an efficient PO43--removing adsorbent over the range of parameters tested.
Keywords: wastewater, filter cake, adsorption capacity, phosphate (PO43-)
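The Langmuir fitting step reported above can be sketched as follows; the equilibrium data points are invented for illustration, and only the fitting procedure reflects the analysis described:

```python
# Sketch of the isotherm fitting step (invented equilibrium data, not the
# study's measurements): fit the Langmuir model q_e = q_m*K*C_e/(1 + K*C_e)
# and report the fitted maximum capacity q_m.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, k):
    return qm * k * ce / (1.0 + k * ce)

# Hypothetical equilibrium concentrations (mg/L) and uptakes (mg/g).
ce = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])
qe = np.array([4.8, 9.5, 14.2, 18.9, 22.5, 24.4])

(qm, k), _ = curve_fit(langmuir, ce, qe, p0=[25.0, 0.1])
print(f"fitted q_m = {qm:.2f} mg/g, K = {k:.3f} L/mg")
```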
Procedia PDF Downloads 230