Search results for: distributed memory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3060

2460 Performance Comparison of Droop Control Methods for Parallel Inverters in Microgrid

Authors: Ahmed Ismail, Mustafa Baysal

Abstract:

Although the world's energy supply is still mainly based on fossil fuels, there is a need for alternative generation systems that are more economic and environmentally friendly, due to the continuously increasing demand for electric energy and limited power resources and networks. Distributed energy resources (DERs) such as fuel cells, wind and solar power have recently become widespread as alternative generation. The microgrid concept has been proposed to solve several problems that might be encountered when integrating DERs into the power system. A microgrid can operate in both grid-connected and island mode, to the benefit of both the utility and customers. Most DERs connected in parallel in the LV grid, such as micro-turbines, wind plants, fuel cells and PV cells, generate electrical power as direct current (DC), which is converted to alternating current (AC) by inverters. Inverters are therefore primary components in a microgrid. There are many control techniques for parallel inverters to manage active and reactive load sharing, some of which are based on the droop method. In the literature, studies usually focus on improving the transient performance of inverters. In this study, the performance of two different controllers based on the droop control method is compared for inverters operated in parallel without any communication link. For this aim, a microgrid is designed in which the inverters are controlled by a conventional droop controller and a modified droop controller; the modified controller is obtained by adding a PID controller to the conventional droop control. The active and reactive power sharing performance and the voltage and frequency responses of these control methods are measured in several operational cases. The study cases have been simulated in MATLAB-Simulink.
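The conventional droop relations described in the abstract can be sketched as follows. The setpoints and droop coefficients below are illustrative values, not those used in the study:

```python
def droop_setpoints(p, q, f0=50.0, v0=230.0, kp=1e-5, kq=1e-4):
    """P-f / Q-V droop: an inverter lowers its frequency reference as it
    supplies more active power, and its voltage reference as it supplies
    more reactive power, so parallel units share load without any
    communication link between them."""
    f = f0 - kp * p  # P-f droop
    v = v0 - kq * q  # Q-V droop
    return f, v
```

The modified controller in the study adds a PID term on top of these static relations to improve the transient response; the sketch above covers only the conventional steady-state droop law.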

Keywords: active and reactive power sharing, distributed generation, droop control, microgrid

Procedia PDF Downloads 581
2459 Revising Australia’s Collective Memory through Post-Colonial Storytelling

Authors: Linda Jane Wells

Abstract:

In 1914 Topsy Smith, a woman of the First Nations Arabana tribe, arrived in Alice Springs with her seven children and a herd of goats. They had come in from the goldfields at Arltunga, where they had been living, and Topsy's husband, Welsh-born Bill Smith, had recently died. Sergeant Stott, the local policeman and sub-protector of Aborigines for the region, erected a tin shed for Topsy and the children to live in, which became known as 'the Bungalow' for so-called half-castes. Over the years that followed, many more children of mixed descent were removed from their families and brought to live at the Bungalow until, a decade later, sixty children were growing up there, cared for predominantly by Topsy Smith; Ida Standley, the town's first white schoolteacher; and Sergeant Stott. The story of the Bungalow is pivotal to the foundations of social relations in the town of Alice Springs and beyond. At the same time, it is little known, recognised or understood locally, let alone more broadly. This is typical of the dominant historical narratives that have emerged out of the Australian colonial project and led to 'the Great Australian Silence', a term coined by the Australian anthropologist W. E. H. Stanner in his 1968 Boyer Lectures in reference to the omission of the Aboriginal experience from the dominant narratives of the nation's history. In his lecture, he attributed this silence to something that may have begun as a simple forgetting of other possible views, which turned, under habit and over time, into something like a cult of forgetfulness practised on a national scale. This doctoral project, underpinned by a methodology of practice-led research, engages a bricolage of methods including archival research, ethnography, and oral histories to research the Bungalow and the context in which it operated.
Techniques of fictocriticism, speculative biography, autoethnography, and archival poetics are then engaged to write the research outcomes into a post-colonial, multi-genre work of creative non-fiction that speaks into the silences in the archives. The overall intent of this doctoral work is to explore and demonstrate how techniques of creative non-fiction can be used to rewrite narratives of Australian colonial history that resonate beyond the academy, thus contributing to the bank of post-colonial stories and working towards a more just, honest and inclusive national ‘memory’ and identity.

Keywords: Australian history, collective memory, creative non-fiction, postcolonialism

Procedia PDF Downloads 101
2458 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The correct allocation of improvement programs has attracted growing interest in recent years. Because their resources are limited, companies must ensure that their financial resources are directed to the right workstations in order to be most effective and to survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This research gap is studied in depth in this work, whose purpose is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to run a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair, and lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed at the two capacity constrained resources, and (x) time between failures improvement directed at the two capacity constrained resources. These ten strategies are variations of the three main strategies for improvement programs: focused, distributed and hybrid.
Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to academia is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and the different positions of the capacity constrained resources. Finally, both the hybrid time to repair improvement and the hybrid time between failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the specific flow shop analyzed. Future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 111
2457 Optimal Placement and Sizing of Energy Storage System in Distribution Network with Photovoltaic Based Distributed Generation Using Improved Firefly Algorithms

Authors: Ling Ai Wong, Hussain Shareef, Azah Mohamed, Ahmad Asrul Ibrahim

Abstract:

The installation of photovoltaic-based distributed generation (PVDG) in an active distribution system can lead to voltage fluctuation due to the intermittent and unpredictable PVDG output power. This paper presents a method for mitigating the voltage rise by optimally locating and sizing a battery energy storage system (BESS) in a PVDG-integrated distribution network. An improved firefly algorithm is used to perform the optimal placement and sizing. Three objective functions are presented, considering the voltage deviation and the BESS off-time, with the state of charge as the constraint. The performance of the proposed method is compared with other optimization methods, namely the original firefly algorithm and the gravitational search algorithm. Simulation results show that the proposed optimal BESS location and size improve voltage stability.
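For readers unfamiliar with the firefly algorithm, a minimal sketch of the canonical update rule is below: each firefly moves toward every brighter one with an attractiveness that decays with distance, plus a small random step. The toy objective, bounds and parameters are illustrative stand-ins for the BESS siting and sizing problem; they are not taken from the paper:

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=60,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=1):
    """Canonical firefly algorithm: attractiveness beta0*exp(-gamma*r^2)
    pulls each firefly toward every brighter (lower-objective) one."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        xs.sort(key=f)                      # index 0 = brightest firefly
        for i in range(n):
            for j in range(i):              # move i toward each brighter j
                r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                beta = beta0 * math.exp(-gamma * r2)
                xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                         for a, b in zip(xs[i], xs[j])]
        alpha *= 0.97  # decaying randomness; "improved" variants tune such parameters
    return min(xs, key=f)
```

Improved variants of the kind the abstract mentions typically adapt the randomization step or the attractiveness schedule; the decay on `alpha` above is only a hint of that idea.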

Keywords: BESS, firefly algorithm, PVDG, voltage fluctuation

Procedia PDF Downloads 314
2456 Vulnerable Paths Assessment for Distributed Denial of Service Attacks in a Cloud Computing Environment

Authors: Manas Tripathi, Arunabha Mukhopadhyay

Abstract:

In a cloud computing environment, cloud servers may crash after receiving a huge number of requests, and cloud services may stop, causing great losses to the users of those services. This situation is called a Denial of Service (DoS) attack. In a Distributed Denial of Service (DDoS) attack, an attacker targets multiple network paths by compromising various vulnerable systems (zombies) and floods the victim with a huge volume of requests through these zombies. There are many solutions to mitigate this challenge, but most methods allow the attack traffic to arrive at the Cloud Service Provider (CSP) and only then take mitigation actions. In this paper we focus instead on a preventive mechanism to deal with these attacks. We analyze the network topology and find the most vulnerable paths beforehand, without waiting for the traffic to arrive at the CSP, using Dijkstra's algorithm and Yen's k-shortest paths algorithm. Finally, the risk of these paths can be assessed by multiplying the probability of attack along each path by the potential loss.
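The path-finding step can be sketched with a standard Dijkstra implementation over a weighted topology graph, followed by the probability-times-loss risk product the abstract describes. The toy topology, node names and numbers below are hypothetical:

```python
import heapq

def dijkstra(graph, src):
    """Cheapest-path cost from src to every reachable node.
    graph: node -> {neighbor: edge weight}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def path_risk(attack_probability, potential_loss):
    # Risk assessment as described: probability of attack times potential loss.
    return attack_probability * potential_loss

# Hypothetical topology: a zombie host can reach the CSP via two routers.
net = {"zombie": {"r1": 1.0, "r2": 3.0}, "r1": {"csp": 1.0}, "r2": {"csp": 1.0}}
```

Yen's k-shortest paths algorithm, also used in the paper, enumerates alternative paths by repeatedly running a shortest-path search like this one with selected edges removed; it is omitted here for brevity.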

Keywords: cloud computing, DDoS, Dijkstra, Yen’s k-shortest path, network security

Procedia PDF Downloads 269
2455 Solid Waste Management Challenges and Possible Solution in Kabul City

Authors: Ghulam Haider Haidaree, Nsenda Lukumwena

Abstract:

Most developing nations face energy production and supply problems. This is also the case in Afghanistan, whose generating capacity does not meet its energy demand. This is due in part to the high security risks caused by war, which deter foreign investment, and to insufficient internal revenue. To address this issue, this paper suggests an alternative and affordable way to deal with the energy problem: converting solid waste to energy. This approach also tackles the municipal solid waste issue (a potential cause of several diseases) and contributes to improving the quality of life, the local economy, and so on. While addressing the solid waste problem in general, this paper specifically samples one municipality, District 12, one of the 22 districts of Kabul city. Using geographic information system (GIS) technology, District 12 is divided into nine zones whose municipal solid waste is respectively collected, processed, and converted into electricity that is distributed to the closest areas. GIS has also been used to estimate the amount of electricity to be distributed and to optimally position the production plant.

Keywords: energy problem, estimation of electricity, GIS zones, solid waste management system

Procedia PDF Downloads 324
2454 Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future

Authors: Gabriel Wainer

Abstract:

Modeling and simulation (M&S) methods have long been used to analyze the behavior of complex physical systems, and it is now common to use simulation as part of the scientific and technological discovery process. M&S advanced thanks to improvements in computer technology, which, in many cases, resulted in simulation software developed using ad hoc techniques. Formal M&S methods appeared in an attempt to improve the development of very complex simulation systems. Some of these techniques proved successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition, enhancing application development, reducing costs and favoring reuse. The DEVS formalism is one of these techniques; it provides means for modeling while reducing development complexity and costs, and DEVS model development is based on a sound theoretical framework. The independence of M&S tasks made it possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce the origins and general ideas of DEVS and compare it with some of these techniques. We will then show the current status of DEVS M&S and discuss a technological perspective on current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show examples of the current use of DEVS, including applications in different fields. Finally, we will present open topics in the area, including advanced methods for centralized, parallel or distributed simulation and the need for real-time modeling techniques, together with our view of these fields.
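As a concrete illustration of the DEVS idea, here is a minimal atomic model with the classic time-advance, output and internal-transition functions, driven by a tiny event loop. It is a didactic toy (a periodic job generator), not an excerpt from any DEVS toolkit:

```python
class Generator:
    """Minimal DEVS-style atomic model: emits a 'job' every `period` units."""
    def __init__(self, period):
        self.period = period
        self.count = 0            # state
    def ta(self):                 # time advance: time until next internal event
        return self.period
    def output(self):             # lambda: output emitted just before transition
        return ("job", self.count)
    def int_transition(self):     # delta_int: state change at an internal event
        self.count += 1

def simulate(model, until):
    """Advance simulated time by ta(), collecting (time, output) events."""
    t, events = 0.0, []
    while t + model.ta() <= until:
        t += model.ta()
        events.append((t, model.output()))
        model.int_transition()
    return events
```

A full DEVS atomic model also has an external transition function for inputs, and coupled models compose atomics through port connections; both are omitted here to keep the sketch short.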

Keywords: modeling and simulation, discrete-event simulation, hybrid systems modeling, parallel and distributed simulation

Procedia PDF Downloads 311
2453 Research and Implementation of Cross-domain Data Sharing System in Net-centric Environment

Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan

Abstract:

With the rapid development of network and communication technology, a great deal of data is generated in the different domains of a network. These data show a trend of increasing scale and more complex structure, so an effective and flexible cross-domain data-sharing system is needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three subsystems. The data distribution subsystem provides a data exchange service through publish-subscribe technology that supports asynchronous, many-to-many communication, suiting the needs of a dynamic, large-scale distributed computing environment. The access control subsystem adopts Attribute-Based Access Control (ABAC) technology to uniformly model data attributes such as subject, object, permission and environment; it effectively monitors the activities of users accessing resources and ensures that legitimate users get the appropriate access rights within a legal time window. The cross-domain access security negotiation subsystem automatically determines the access rights between different security domains in the process of interactive disclosure of digital certificates and access control policies, through trust policy management and negotiation algorithms; this provides an effective means for establishing cross-domain trust relationships and access control in a distributed environment. The CDSS's asynchronous, many-to-many and loosely coupled communication features adapt well to data exchange and sharing in dynamic, distributed and large-scale network environments. In future work, the CDSS will be extended to support mobile computing environments.
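An ABAC decision of the kind the access control subsystem performs can be sketched as a check of subject, object, action and environment attributes against a policy set. The attribute names and the example policy below are invented for illustration and are not the CDSS policy model:

```python
from datetime import time

def abac_allow(subject, obj, action, env, policies):
    """Grant access iff every condition of at least one policy holds."""
    request = {"subject": subject, "object": obj, "action": action, "env": env}
    return any(all(cond(request) for cond in policy) for policy in policies)

# Hypothetical policy: analysts may read 'shared' objects during working hours.
policies = [[
    lambda r: r["subject"].get("role") == "analyst",
    lambda r: r["object"].get("label") == "shared",
    lambda r: r["action"] == "read",
    lambda r: time(8, 0) <= r["env"]["time"] <= time(18, 0),
]]
```

Modeling the environment (here, the time of day) as a first-class attribute is what lets ABAC express constraints like "within a legal time window" that role-based schemes cannot state directly.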

Keywords: data sharing, cross-domain, data exchange, publish-subscribe

Procedia PDF Downloads 115
2452 Mechanical and Physical Properties of Aluminum Composite Reinforced with Carbon Nano Tube Dispersion via Ultrasonic and Ball Mill Attrition after Severe Plastic Deformation

Authors: Hassan Zare, Mohammad Jahedi, Mohammad Reza Toroghinejad, Mahmoud Meratian, Marko Knezevic

Abstract:

In this study, carbon nanotube (CNT) reinforced Al-matrix nanocomposites were fabricated by equal channel angular pressing (ECAP). The ECAP process is one of the most important methods for powder densification due to the shear strain it imposes. Samples with varying numbers of passes (one, two, four and eight) were prepared via route C at room temperature. Few studies on CNT-reinforced metal matrix nanocomposites have been carried out, and interfacial reactions between the matrix and the carbon nanotubes can reduce the efficiency of the nanocomposite. In this paper, we examine the mechanical and physical properties of an aluminum-CNT composite manufactured by ECAP as the composite is deformed. The non-agglomerated CNTs (2%) were distributed homogeneously in the aluminum matrix during consolidation. The ECAP process was performed for 8 passes on both the monolithic sample and the composite with distributed CNTs.

Keywords: powder metallurgy, ball mill attrition, ultrasonic, consolidation

Procedia PDF Downloads 482
2451 Neuroplasticity in Language Acquisition in English as Foreign Language Classrooms

Authors: Sabitha Rahim

Abstract:

In the context of teaching vocabulary in English as a Foreign Language (EFL), the confluence of memory and retention is one of the most significant factors in students' language acquisition. The progress of students engaged in foreign language acquisition is often stymied by vocabulary attrition, which leads to learners' lack of confidence and motivation. However, little research has investigated the importance of neuroplasticity in foreign language acquisition and how underused neural pathways lead to the loss of plasticity, thereby affecting the learners' vocabulary retention and motivation. This research explored the effect of enhancing the vocabulary acquisition of EFL students in the Foundation Year at King Abdulaziz University through various methods and neuroplasticity exercises that reinforced their attention, motivation, and engagement. It analyzed the results to determine whether stimulating the brains of EFL learners through various physical and mental activities led to improvements in short- and long-term memory and vocabulary retention. The main data collection methods were student surveys, teachers' assessment records, student achievement test results, and follow-up interviews with students. A key implication of this research is for institutions to consider offering multiple varieties of student activities that promote brain plasticity within the classroom as an effective tool for foreign language acquisition. Building awareness among the faculty and adapting the curriculum to include activities that promote brain plasticity ensures an enhanced learning environment and effective language acquisition in EFL classrooms.

Keywords: language acquisition, neural paths, neuroplasticity, vocabulary attrition

Procedia PDF Downloads 159
2450 Interactive IoT-Blockchain System for Big Data Processing

Authors: Abdallah Al-ZoubI, Mamoun Dmour

Abstract:

The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics and education, to name a few. Active IoT endpoint sensors and devices exceeded the 12 billion mark in 2021 and are expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has actually accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions for logistic reasons such as supply chain interruptions, shortages of shipping containers and port congestion. An IoT-blockchain system is proposed to handle, in an interactive manner, the big data generated by a distributed network of sensors and controllers. The system is designed on the Ethereum platform and utilizes smart contracts, programmed in Solidity, to execute and manage the data generated by IoT sensors and devices such as the Raspberry Pi 4, running Raspbian with add-on hardware security modules. The proposed system runs a number of applications hosted by a local machine used to validate transactions, then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed, controlled IoT ecosystem run by blockchain in which a number of distributed IoT devices can communicate and interact. A prototype has been deployed with three IoT handling units distributed over a wide geographical space in order to examine its feasibility, performance and costs. Initial results indicated that big IoT data retrieval and storage is feasible and interactivity is possible, provided that certain conditions of cost, speed and throughput are met.
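The tamper-evidence that the blockchain layer provides can be illustrated, well below the level of Ethereum and smart contracts, with a simple hash chain over sensor readings. This is a didactic sketch, not the system described in the abstract:

```python
import hashlib
import json

def add_block(chain, reading):
    """Append a block whose hash covers the reading and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"reading": reading, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any altered reading breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = {"reading": block["reading"], "prev": block["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```

A real blockchain adds distributed consensus and replication on top of this linking idea, which is what removes the need to trust any single IoT handling unit.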

Keywords: IoT devices, blockchain, Ethereum, big data

Procedia PDF Downloads 136
2449 Urban Renewal from the Perspective of Industrial Heritage Protection: Taking the Qiaokou District of Wuhan as an Example

Authors: Yue Sun, Yuan Wang

Abstract:

Most of the earliest national industries in Wuhan were located along the Hanjiang River, and Qiaokou is considered a gathering place of the old Dahankou industrial base. Zongguan Waterworks, the Pacific Soap Factory, the Fuxin Flour Factory, the Nanyang Tobacco Factory and other hundred-year-old factories are located along the Hanjiang River in Qiaokou District, especially in the Gutian Industrial Zone, which was listed as one of the 156 national key projects at the beginning of the founding of the People's Republic of China. After decades of development, Qiaokou became a concentration of the chemical industry and secondary industry, causing damage to the city and serious pollution and turning the district into a marginalized area forgotten by the central city. In recent years, with the accelerating pace of urban renewal, Qiaokou has been reforming and innovating, beginning drastic changes in the transformation of old districts and the development of new ones. These factories have been listed as key reconstruction projects, and a large number of industrial heritage sites with historical value and rich urban memory have been relocated, demolished or converted, with only a few factory buildings preserved. Through the methods of industrial archaeology, image analysis, typology and field investigation, this paper analyzes and summarizes the spatial characteristics of industrial heritage in Qiaokou District, explores urban renewal from the perspective of industrial heritage protection, and provides design strategies for the regeneration of urban industrial sites and industrial heritage.

Keywords: industrial heritage, urban renewal, protection, urban memory

Procedia PDF Downloads 136
2448 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common way of providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison with traditional methods, the proposed GAILoc method can significantly improve positioning performance and reduce radio map construction costs.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 58
2447 Energy System Analysis Using Data-Driven Modelling and Bayesian Methods

Authors: Paul Rowley, Adam Thirkill, Nick Doylend, Philip Leicester, Becky Gough

Abstract:

The dynamic performance of all energy generation technologies is impacted to varying degrees by the stochastic properties of the wider system within which the generation technology is located. This stochasticity can include the varying nature of ambient renewable energy resources such as wind or solar radiation, or unpredicted changes in energy demand which impact the operational behaviour of thermal generation technologies. An understanding of these stochastic impacts is especially important in contexts such as highly distributed (or embedded) generation, where the individual or aggregated performance of large numbers of relatively small generators matters, as in ESCO projects. Probabilistic evaluation of monitored or simulated performance data is one technique which can provide insight into the dynamic performance characteristics of generating systems, both in a prognostic sense (such as predicting future performance at the project's design stage) and in a diagnostic sense (such as real-time analysis of underperforming systems). In this work, we describe the development, application and outcomes of a new approach to acquiring datasets suitable for use in subsequent performance and impact analysis (including Bayesian approaches) for a number of distributed generation technologies. The application of the approach is illustrated using case studies involving domestic and small commercial scale photovoltaic, solar thermal and natural gas boiler installations, and the results show that the methodology offers significant advantages in terms of plant efficiency prediction and diagnosis, along with allied environmental and social impacts such as greenhouse gas emission reduction and fuel affordability.
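The abstract does not give its Bayesian formulation, but the prognostic-versus-diagnostic flavor of probabilistic performance evaluation can be sketched with a conjugate Beta-Binomial update on the probability that an installation meets its daily performance target. The prior and the monitored counts below are invented for illustration:

```python
def beta_update(alpha, beta, met_target, missed_target):
    """Conjugate update of a Beta(alpha, beta) prior on the probability
    that a plant meets its daily performance target."""
    return alpha + met_target, beta + missed_target

def posterior_mean(alpha, beta):
    """Point estimate of the target-meeting probability."""
    return alpha / (alpha + beta)

# Prognosis: a uniform Beta(1, 1) prior at the design stage.
# Diagnosis: after 20 monitored days (18 on-target, 2 below target).
a, b = beta_update(1.0, 1.0, 18, 2)
```

The same posterior serves both roles: at design time it is dominated by the prior (prediction), while under monitoring the observed counts dominate, flagging underperforming systems whose posterior mean drifts low.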

Keywords: renewable energy, dynamic performance simulation, Bayesian analysis, distributed generation

Procedia PDF Downloads 487
2446 Cloud Computing in Data Mining: A Technical Survey

Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham

Abstract:

Cloud computing poses a diversity of challenges for data mining operations, arising from the dynamic structure of data distribution as opposed to the typical database scenarios of conventional architectures. Due to the immense number of users seeking data on a daily basis, there are serious security concerns for cloud providers as well as for data providers who put their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (hidden Markov models, MapReduce parallel programming, the Mahout project, the Hadoop distributed file system, K-Means and K-Medoids, Apriori) that require efficient, high-performance processors to produce timely results; these algorithms iteratively solve for or optimize model parameters. The challenges such operations encounter are establishing successful transactions with the existing virtual machine environment and keeping the databases under control. Several factors have led from normal, centralized mining to distributed data mining. One approach runs as SaaS and uses multi-agent systems to implement the different tasks of the system. There are still open problems in cloud-based data mining, including the design and selection of data mining algorithms.
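Of the algorithms the abstract lists, K-Means is the easiest to sketch. The minimal version below uses a naive first-k initialization so the example is deterministic; production systems would use k-means++ seeding or a MapReduce formulation for cloud-scale data, and the sample points are invented:

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: assign each point to its nearest center,
    then recompute each center as the mean of its cluster."""
    centers = [tuple(p) for p in points[:k]]  # naive deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two obvious clumps around (0, 0) and (10, 10).
pts = [(0, 0), (10, 10), (0, 1), (1, 0), (10, 9), (9, 10)]
```

The assignment step is embarrassingly parallel over points, which is exactly why K-Means maps cleanly onto MapReduce: map emits (nearest-center, point), reduce averages each group.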

Keywords: cloud computing, data mining, computing models, cloud services

Procedia PDF Downloads 463
2445 Advancing Our Understanding of Age-Related Changes in Executive Functions: Insights from Neuroimaging, Genetics and Cognitive Neurosciences

Authors: Yasaman Mohammadi

Abstract:

Executive functions are a critical component of goal-directed behavior, encompassing a diverse set of cognitive processes such as working memory, cognitive flexibility, and inhibitory control. These functions are known to decline with age, but the precise mechanisms underlying this decline remain unclear. This paper provides an in-depth review of recent research investigating age-related changes in executive functions, drawing on insights from neuroimaging, genetics, and cognitive neuroscience. Through an interdisciplinary approach, this paper offers a nuanced understanding of the complex interplay between neural mechanisms, genetic factors, and cognitive processes that contribute to executive function decline in aging. Here, we investigate how different neuroimaging methods, like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), have helped scientists better understand the brain bases for age-related declines in executive function. Additionally, we discuss the role of genetic factors in mediating individual differences in executive functions across the lifespan, as well as the potential for cognitive interventions to mitigate age-related decline. Overall, this paper presents a comprehensive and integrative view of the current state of knowledge regarding age-related changes in executive functions. It underscores the need for continued interdisciplinary research to fully understand the complex and dynamic nature of executive function decline in aging, with the ultimate goal of developing effective interventions to promote healthy cognitive aging.

Keywords: executive functions, aging, neuroimaging, cognitive neuroscience, working memory, cognitive training

Procedia PDF Downloads 58
2444 Comparative Assessment of a Distributed Model and a Lumped Model for Estimating of Sediments Yielding in Small Urban Areas

Authors: J. Zambrano Nájera, M. Gómez Valentín

Abstract:

The increase in urbanization during the twentieth century brought with it, as one major problem, increased sediment production. Hydraulic erosion is one of the major causes of the increase of sediments in small urban catchments. Such increases in sediment yield in headwater urban catchments can cause obstruction of drainage systems, making it impossible to capture urban runoff, increasing runoff volumes and thus exacerbating urban flooding problems. For these reasons, it is increasingly important to study sediment production in urban watersheds in order to properly analyze and solve the problems associated with sediments. The study of sediment production has improved with the use of mathematical modeling. For that reason, a new physically based model is proposed, applicable to small headwater urban watersheds, that retains the advantages of distributed physically based models but with more realistic data requirements. Additionally, in this paper the proposed model is compared with a lumped model, reviewing the results and the advantages and disadvantages of both.
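The difference between the two model types can be illustrated with a deliberately simple power-law erosion rate (not the equations of either model in the paper): because the rate is nonlinear in slope, summing cell by cell over a distributed slope map gives a different yield than applying the same rate once at the catchment-mean slope. The exponent, coefficient and slope values below are hypothetical:

```python
def cell_erosion(slope, k=1.0, m=1.3):
    """Illustrative power-law erosion rate per cell (hypothetical k, m)."""
    return k * slope ** m

def distributed_yield(slopes):
    # Distributed model: evaluate the rate on every cell, then sum.
    return sum(cell_erosion(s) for s in slopes)

def lumped_yield(slopes):
    # Lumped model: a single evaluation at the catchment-mean slope.
    mean_slope = sum(slopes) / len(slopes)
    return len(slopes) * cell_erosion(mean_slope)

slope_map = [0.02, 0.05, 0.30, 0.01]  # hypothetical cell slopes (m/m)
```

For a convex rate like this one, Jensen's inequality guarantees the distributed sum exceeds the lumped estimate whenever the slopes vary, which is one reason distributed models can out-predict lumped ones in heterogeneous catchments at the cost of heavier data requirements.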

Keywords: erosion, hydrologic modeling, urban runoff, sediment modeling, sediment yielding, urban planning

Procedia PDF Downloads 334
2443 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common way to provide continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight conditions, multipath, and weather, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed GAILoc method can significantly improve positioning performance and reduce radio map construction costs.
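The t-SNE feature-extraction step described above can be sketched as follows. This is an illustrative example only, not the authors' code: the fingerprint database, its dimensions, and the RSS values are all hypothetical stand-ins for a hybrid WLAN/LTE survey.

```python
# Illustrative sketch (not the authors' pipeline): reducing hybrid
# WLAN/LTE RSS fingerprints to a compact, denoised feature space with
# t-SNE. All array names and sizes are hypothetical.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical fingerprint database: 60 reference points, each with
# RSS readings from 25 WLAN APs and 15 LTE cells (40 features).
fingerprints = rng.normal(-70, 10, size=(60, 40))

# t-SNE preserves dominant neighborhood structure while suppressing
# noisy dimensions; the low-dimensional embedding serves as the
# feature vector for each reference point.
tsne = TSNE(n_components=2, perplexity=10, init="random", random_state=0)
features = tsne.fit_transform(fingerprints)

print(features.shape)  # one 2-D feature vector per reference point
```

In a real deployment the embedding would be fitted on the surveyed radio map and the resulting features fed to the localization model.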

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 54
2442 Transverse Vibration of an Elastic Beam Resting on a Variable Elastic Foundation Subjected to a Moving Load

Authors: Idowu Ibikunle Albert, Atilade Adesanya Oluwafemi, Okedeyi Abiodun Sikiru, Mustapha Rilwan Adewale

Abstract:

In recent times, all areas of transport have experienced major advances, characterized by increases in vehicle speeds and weights. As a result, this paper considers the transverse vibration of an elastic beam resting on a variable elastic foundation subjected to a moving load. The beam is presumed to have uniformly distributed mass and simple supports at both ends. The moving distributed mass is assumed to travel at constant velocity. The governing equations, which are fourth-order partial differential equations, were reduced to second-order partial differential equations using an analytical method in terms of a series solution, and solved by a numerical method using mathematical software (Maple). Results show that an increase in the values of the beam parameters, moving mass M, and foundation stiffness K significantly reduces the deflection profile of the vibrating beam. It was equally found that the response to a moving mass is greater than the response to a moving force.
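A series-solution reduction of this kind can be sketched numerically. The example below is a simplified, hedged version of the problem class, not the paper's exact formulation: it uses a moving point force (not a distributed moving mass) and a constant foundation stiffness, with a sine series for a simply supported Euler-Bernoulli beam; all parameter values are illustrative.

```python
# Hedged sketch: simply supported Euler-Bernoulli beam on an elastic
# foundation under a moving point force. A sine series reduces the
# fourth-order PDE to uncoupled second-order modal ODEs, which are
# then integrated numerically. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

L, EI, rhoA, k = 20.0, 2.0e7, 1.0e3, 5.0e5   # span, bending stiffness, mass/length, foundation
F, v, N = 1.0e4, 10.0, 5                     # load magnitude, speed, number of modes

# modal natural frequencies squared: (EI*(n*pi/L)^4 + k) / (rho*A)
omega2 = np.array([(EI * (n * np.pi / L) ** 4 + k) / rhoA
                   for n in range(1, N + 1)])

def rhs(t, y):
    q, qd = y[:N], y[N:]
    # modal projection of the moving load F * delta(x - v*t)
    f = (2 * F / (rhoA * L)) * np.sin(np.arange(1, N + 1) * np.pi * v * t / L)
    return np.concatenate([qd, f - omega2 * q])

T = L / v                                    # time for the load to cross the span
sol = solve_ivp(rhs, (0, T), np.zeros(2 * N), max_step=T / 2000)

# midspan deflection w(L/2, t) = sum_n q_n(t) * sin(n*pi/2)
shape = np.sin(np.arange(1, N + 1) * np.pi / 2)
w_mid = sol.y[:N].T @ shape
print(f"max midspan deflection: {w_mid.max():.3e} m")
```

Raising `k` in this sketch reduces the peak deflection, consistent with the trend reported in the abstract.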

Keywords: elastic beam, moving load, response of structure, variable elastic foundation

Procedia PDF Downloads 110
2441 The Effect of Randomly Distributed Polypropylene Fibers and Some Additive Materials on Freezing-Thawing Durability of a Fine-Grained Soil

Authors: A. Şahin Zaimoglu

Abstract:

A number of studies have recently been conducted to investigate the influence of randomly oriented fibers on some engineering properties of cohesive and cohesionless soils. However, few studies have been carried out on the freezing-thawing behavior of fine-grained soils modified with discrete fiber inclusions and additive materials. This experimental study was performed to investigate the effect of randomly distributed polypropylene fibers (PP) and some additive materials [e.g., borogypsum (BG), fly ash (FA) and cement (C)] on the freezing-thawing durability (mass losses) of a fine-grained soil over 6, 12 and 18 cycles. The Taguchi method was applied to the experiments, and a standard L9 orthogonal array (OA) with four factors and three levels was chosen. A series of freezing-thawing tests was conducted on each specimen. 0-20% BG, 0-20% FA, 0-0.25% PP and 0-3% C by total dry weight of the mixture were used in the preparation of specimens. Experimental results showed that the most effective materials for the freezing-thawing durability (mass losses) of the samples were borogypsum and fly ash. The mass losses for 6, 12 and 18 cycles in optimum conditions were 16.1%, 5.1% and 3.6%, respectively.
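The experimental design described above can be sketched as follows. This uses the standard L9(3^4) orthogonal array with a smaller-is-better signal-to-noise ratio, as is conventional in Taguchi analysis; the factor level values follow the ranges in the abstract, but the intermediate levels and all mass-loss results are hypothetical.

```python
# Illustrative Taguchi L9(3^4) design: four factors (BG, FA, PP, C) at
# three levels, nine runs, ranked by a smaller-is-better S/N ratio for
# freeze-thaw mass loss. Mass-loss values are hypothetical.
import numpy as np

# standard L9 orthogonal array (levels coded 0, 1, 2)
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

levels = {"BG %": [0, 10, 20], "FA %": [0, 10, 20],
          "PP %": [0, 0.125, 0.25], "C %": [0, 1.5, 3]}

# hypothetical freeze-thaw mass losses (%) for the nine runs
mass_loss = np.array([21.4, 12.3, 9.8, 15.1, 8.7, 11.2, 9.5, 7.9, 10.4])

# smaller-is-better S/N ratio: -10*log10(mean(y^2)); higher is better
sn = -10 * np.log10(mass_loss.astype(float) ** 2)

for run, (row, y, s) in enumerate(zip(L9, mass_loss, sn), 1):
    mix = {f: levels[f][lv] for f, lv in zip(levels, row)}
    print(f"run {run}: {mix}  mass loss={y}%  S/N={s:.1f} dB")

best = int(np.argmax(sn)) + 1
print("best run (highest S/N):", best)
```

With replicated cycles, the mean of y² across replicates would be used in the S/N ratio instead of a single measurement.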

Keywords: freezing-thawing, additive materials, reinforced soil, optimization

Procedia PDF Downloads 296
2440 The Role of the Child's Previous Inventory in Verb Overgeneralization in Spanish Child Language: A Case Study

Authors: Mary Rosa Espinosa-Ochoa

Abstract:

The study of overgeneralization in inflectional morphology provides evidence for understanding how a child's mind works when applying linguistic patterns in a novel way. High-frequency inflectional forms in the input cause inappropriate use in contexts related to lower-frequency forms. Children learn verbs as lexical items and new forms develop only gradually, around their second year: most of the utterances that children produce are closely related to what they have previously produced. Spanish has a complex verbal system that inflects for person, mood, and tense. Approximately 200 verbs are irregular, and bare roots always require an inflected form, which represents a challenge for the memory. The aim of this research is to investigate i) what kinds of overgeneralization errors children make in verb production, ii) to what extent these errors are related to verb forms previously produced, and iii) whether the overgeneralized verb components are also frequent in children’s linguistic inventory. It consists of a high-density longitudinal study of a middle-class girl (1;11,24-2;02,24) from Mexico City, whose utterances were recorded almost daily for three months to compile a unique corpus in the Spanish language. Of the 358 types of inflected verbs produced by the child, 9.11% are overgeneralizations. Not only are inflected forms (verbal and pronominal clitics) overgeneralized, but also verbal roots. Each of the forms can be traced to previous utterances, and they show that the child is detecting morphological patterns. Neither verbal roots nor inflected forms are associated with high frequency patterns in her own speech. 
For example, the child alternates the bare roots of an irregular verb, cáye-te* and cáiga-te* (“fall down”), to express the imperative of the verb cá-e-te (fall down.IMPERATIVE-PRONOMINAL.CLITIC), although cay-ó (PAST.PERF.3SG) is the most frequent form in her previous complete inventory, and the combined frequency of caer (INF), cae (PRES.INDICATIVE.3SG), and caes (PRES.INDICATIVE.2SG) is the same as that of caiga (PRES.SUBJ.1SG and 3SG). These results provide evidence that a) two forms of the same verb compete in the child’s memory, and b) although the child uses her own inventory to create new forms, these forms are not necessarily frequent in her memory storage, which means that her mind is more sensitive to external stimuli. Language acquisition is a developing process, given the sensitivity of the human mind to linguistic interaction with the outside world.

Keywords: inflection, morphology, child language acquisition, Spanish

Procedia PDF Downloads 93
2439 Obsession of Time and the New Musical Ontologies. The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe

Authors: Dutica Luminita

Abstract:

For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a conclusive impact on the settlement of new musical ontologies. Summarizing the most important achievements of contemporary techniques of composition, her vision of the microform presented in The Concert for Daniel Kientzy, saxophone and orchestra transcends linear, unidirectional time in favour of a flexible, multi-vectorial discourse with spiral developments, where the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the tint of improvisation, free rhythm, micro-interval intonation, and a coloristic-timbral universe dominated by multiphonics and unique sound effects. Hence also the atmosphere of ritual, purged, however, of its primary connotations and reprojected into a wonderful spectacular space. The Concert is a work of artistic maturity and commands respect, among other things, for the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino and alto); in Part III, Daniel Kientzy performs on two saxophones concomitantly. Myriam Marbe's score contains deeply spiritualized music, full of archetypal symbols, a music whose drama suggests a real cinematographic movement.

Keywords: archetype, chronogenesis, concert, multiphonics

Procedia PDF Downloads 534
2438 Allium Cepa Extract Provides Neuroprotection Against Ischemia Reperfusion Induced Cognitive Dysfunction and Brain Damage in Mice

Authors: Jaspal Rana (Alkem Laboratories, Baddi, Himachal Pradesh, India; Chitkara University, Punjab, India)

Abstract:

Oxidative stress has been identified as an underlying cause of ischemia-reperfusion (IR) related cognitive dysfunction and brain damage. Therefore, antioxidant-based therapies to treat IR injury are being investigated. Allium cepa L. (onion) is used as a culinary medicine and is documented to have marked antioxidant effects. Hence, the present study was designed to evaluate the effect of A. cepa outer scale extract (ACE) against IR-induced cognitive and biochemical deficits in mice. ACE was prepared by maceration with 70% methanol and fractionated into ethyl acetate and aqueous fractions. Bilateral common carotid artery occlusion for 10 min followed by 24 h reperfusion was used to induce cerebral IR injury. Following IR injury, ACE (100 and 200 mg/kg) was administered orally to animals for 7 days once daily. Behavioral outcomes (memory and sensorimotor functions) were evaluated using the Morris water maze and the neurological severity score. Cerebral infarct size, brain thiobarbituric acid reactive species, reduced glutathione, and superoxide dismutase activity were also determined. Treatment with ACE significantly ameliorated the IR-mediated deterioration of memory and sensorimotor functions and the rise in brain oxidative stress in animals. The results of the present investigation revealed that ACE improved functional outcomes after cerebral IR injury, which may be attributed to its antioxidant properties.

Keywords: stroke, neuroprotection, ischemia reperfusion, herbal drugs

Procedia PDF Downloads 92
2437 Cognitive Functioning and Cortisol Suppression in Major Depression in a Long-Term Perspective

Authors: Pia Berner Hansson, Robert Murison, Anders Lund, Åsa Hammar

Abstract:

Major Depressive Disorder (MDD) is often associated with high levels of stress and disturbances in the Hypothalamic-Pituitary-Adrenal (HPA) system, yielding high levels of cortisol, in addition to cognitive dysfunction. Previous studies in this patient group have shown a relationship between cortisol profile and cognitive functioning in the acute phase of MDD, and that the patients had significantly less suppression after dexamethasone administration. However, few studies have investigated this relationship over time and in phases of symptom reduction. The aim of the present study was to examine the relationship between cortisol levels after the Dexamethasone Suppression Test (DST) and cognitive function in a long-term perspective in MDD patients. Patients meeting the DSM-IV criteria for MDD were included in the study and tested in a phase of symptom reduction. A control group was included. Cortisol was measured in saliva collected with Salivette sampling devices. Saliva samples were collected four times over a 24-hour period on two consecutive days: at awakening, after 45 minutes, after 7 hours, and at 11 pm. Dexamethasone (1.0 mg) was given on Day 1 at 11 pm. The neuropsychological test battery consisted of standardized tests measuring memory and Executive Functioning (EF). Cortisol levels did not differ significantly between patients and controls on Day 1 or Day 2. Both groups showed significant suppression after dexamethasone. There were no correlations between cortisol levels or suppression after dexamethasone and cognitive measures. The results indicate that HPA-axis functioning normalizes in phases of symptom reduction in MDD patients and that there is no relation between cortisol profile and cognitive functioning in memory or EF.

Keywords: depression, MDD, cortisol, suppression, cognitive functioning

Procedia PDF Downloads 319
2436 Impact of Increasing Distributed Solar PV Systems on Distribution Networks in South Africa

Authors: Aradhna Pandarum

Abstract:

South Africa is experiencing exponential growth in distributed solar PV installations. This is driven by various factors, the predominant one being rising electricity tariffs combined with falling installation costs, which result in attractive business cases for some end-users. Despite the various economic and environmental advantages associated with the installation of PV, its potential impact on distribution grids has yet to be thoroughly investigated. This is especially true since the locations of these units cannot be controlled by Network Service Providers (NSPs) and their output power is stochastic and non-dispatchable. This paper details two case studies completed to determine the possible impact of increasing PV penetration on voltage and technical losses in the Northern Cape of South Africa. The major effects considered in the simulations were ramping of PV generation due to intermittency caused by moving clouds, the size and overall hosting capacity, and the location of the systems. The main finding is that the technical impact differs between a constrained feeder and a non-constrained feeder: the acceptable PV penetration level is much lower for a constrained feeder, depending on where the systems are located.
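One reason hosting capacity differs between feeders can be sketched with the standard voltage-rise approximation at a PV point of connection, dV ≈ (P·R + Q·X)/V². This is a simplified, hypothetical illustration, not the study's simulation setup; feeder impedances, voltage level, and the 5% rise limit are all assumed values.

```python
# Minimal sketch (assumed values, not the study's models): approximate
# per-unit voltage rise at a PV connection point. A high-impedance
# (constrained) feeder reaches the voltage limit at much lower PV
# penetration than a low-impedance one.
def voltage_rise_pu(p_mw, q_mvar, r_ohm, x_ohm, v_kv=11.0):
    """Approximate per-unit voltage rise: (P*R + Q*X) / V^2."""
    v = v_kv * 1e3
    return (p_mw * 1e6 * r_ohm + q_mvar * 1e6 * x_ohm) / (v * v)

for label, r, x in [("short feeder", 0.5, 0.4), ("long feeder", 4.0, 3.2)]:
    for p in (0.5, 1.0, 2.0):  # MW of PV injection at unity power factor
        dv = voltage_rise_pu(p, 0.0, r, x)
        flag = "limit exceeded" if dv > 0.05 else "ok"  # assumed 5% rise limit
        print(f"{label}: {p} MW -> dV = {dv:.3f} pu ({flag})")
```

The full study would additionally capture cloud-driven ramping and losses, which this static approximation ignores.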

Keywords: medium voltage networks, power system losses, power system voltage, solar photovoltaic

Procedia PDF Downloads 137
2435 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several studies have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital sign monitoring. Sepsis is an organic dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Previous works usually combined medical, statistical, mathematical and computational models to develop detection methods for early prediction, achieving higher accuracies while using the smallest number of variables. Among other techniques, studies have applied survival analysis, expert systems, machine learning and deep learning with great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated as the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector) and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The prediction accuracy 6 hours before sepsis onset, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
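The derivative-feature construction described above can be sketched as follows. This is a hedged illustration, not the authors' code: the vital-sign trajectory is synthetic, and finite differences stand in for whatever derivative estimator the study used.

```python
# Hedged sketch of the feature construction: each patient is a
# trajectory of hourly vital signs (the position); finite differences
# approximate the velocity and acceleration vectors that are stacked
# with the raw positions as LSTM input. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
hours, n_vitals = 48, 5          # e.g. HR, temp, SBP, DBP, resp. rate
position = rng.normal(size=(hours, n_vitals)).cumsum(axis=0)

velocity = np.gradient(position, axis=0)       # first derivative per hour
acceleration = np.gradient(velocity, axis=0)   # second derivative per hour

# stack position + velocity + acceleration as the LSTM input features
features = np.concatenate([position, velocity, acceleration], axis=1)
print(features.shape)  # (48, 15): 48 time steps, 3 * 5 features each
```

Each patient's `features` matrix would then be fed, windowed in time, to the LSTM classifier.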

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 111
2434 Linkage between Trace Element Distribution and Growth Ring Formation in Japanese Red Coral (Paracorallium japonicum)

Authors: Luan Trong Nguyen, M. Azizur Rahman, Yusuke Tamenori, Toshihiro Yoshimura, Nozomu Iwasaki, Hiroshi Hasegawa

Abstract:

This study investigated the distribution of magnesium (Mg), phosphorus (P), sulfur (S) and strontium (Sr) using micro X-ray fluorescence (µ-XRF) along the annual growth rings in the skeleton of the Japanese red coral Paracorallium japonicum. The Mg, P and S distributions in the µ-XRF mapping images correspond to the dark and light bands along the annual growth rings observed in microscopic images of the coral skeleton. The µ-XRF mapping data showed a positive correlation (r = 0.6) between the P and S distributions in the coral skeleton. A contrasting distribution pattern of S and Mg along the axial skeleton of P. japonicum indicates a weak negative correlation (r = -0.2) between these two trace elements. The distribution patterns of S, P and Mg reveal a linkage between their distributions and the formation of dark/light bands along the annual growth rings in the axial skeleton of P. japonicum. Sulfur and P were distributed in the organic-matrix-rich dark bands, while Mg was distributed in the light bands of the annual growth rings.
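The correlation analysis behind figures like r = 0.6 can be sketched simply: Pearson's r between two elemental intensity maps, computed pixel-wise after flattening. The maps below are synthetic stand-ins, not µ-XRF data.

```python
# Small sketch (synthetic data): pixel-wise Pearson correlation between
# two elemental intensity maps, the kind of statistic quoted above for
# the P and S distributions.
import numpy as np

rng = np.random.default_rng(2)
p_map = rng.random((64, 64))                        # stand-in P map
s_map = 0.6 * p_map + 0.4 * rng.random((64, 64))    # loosely coupled S map

r = np.corrcoef(p_map.ravel(), s_map.ravel())[0, 1]
print(f"Pearson r(P, S) = {r:.2f}")
```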

Keywords: µ-XRF, trace element, precious coral, Paracorallium japonicum

Procedia PDF Downloads 433
2433 The Relevance of Family Involvement in the Journey of Dementia Patients

Authors: Akankunda Veronicah Karuhanga

Abstract:

Dementia is an age-related mental disorder that causes loss of normal functioning and requires delicate attention. It has been technically defined as a clinical syndrome that presents a number of difficulties in speech and other cognitive functions, changes a person's behavior, and can also cause impairments in activities of daily living, in addition to a range of neurological disorders that bring memory loss and cognitive impairment. Family members are the primary healthcare givers, and therefore the way they handle the situation in its early stages determines future deterioration syndromes like total memory loss. Unfortunately, most family members are ignorant about this condition, and in most cases patients are brought to our facilities when their condition has already been mismanaged by family members, so that not much can be done. For example, incontinence can be managed in its early stages through potty training or toilet scheduling before resorting to 24/7 diapers, which are also not good. Professional elderly care should be understood and practiced as an extension of homes, not a dumping place for people considered “abnormal” on account of ignorance. Immediate relatives should therefore be sensitized concerning the normalcy of dementia in the context of old age so that they can be understanding and supportive of dementia patients rather than discriminating against them as present-day lepers. There is a need to skill home-based caregivers in how to handle dementia in its early stages. Unless this is done, many of our elderly homes will be filled with patients who should have been treated and supported from their homes. This skilling of home-based caregivers is a vital intervention because, until elderly care is appreciated as a human moral obligation, many transactional rehabilitation centers will crop up, and this will be one of the worst moral decadences of our times.

Keywords: dementia, family, Alzheimer's, relevance

Procedia PDF Downloads 73
2432 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations

Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar

Abstract:

Fast simulations are critical for reducing time to market in CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip. Researchers and designers rely on these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types. These techniques reduced cache misses by 18.52%, 5.34% and 3.91%, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD instructions are used to parallelize and vectorize the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.
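Booksim2.0 itself is C++, but the cache principle behind loop interchange can be illustrated in a short Python sketch: NumPy arrays are row-major by default, so iterating with the innermost loop over the last index touches contiguous memory, while column-wise traversal strides across cache lines. The timings are machine dependent and the example is purely illustrative of the access-pattern idea, not of Booksim's code.

```python
# Illustration of the loop-interchange idea: summing a row-major array
# row-by-row (contiguous, cache-friendly) versus column-by-column
# (strided, cache-unfriendly). Both orders compute the same total.
import time
import numpy as np

a = np.arange(2000 * 2000, dtype=np.float64).reshape(2000, 2000)

def sum_row_major(m):
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()       # contiguous stride-1 access
    return total

def sum_col_major(m):
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()       # strided access across rows
    return total

t0 = time.perf_counter(); s1 = sum_row_major(a); t1 = time.perf_counter()
s2 = sum_col_major(a);  t2 = time.perf_counter()
assert np.isclose(s1, s2)           # same result, different access order
print(f"row-major {t1 - t0:.3f}s vs column-major {t2 - t1:.3f}s")
```

In C++ the same interchange is applied directly to nested `for` loops so the inner index walks the contiguous dimension.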

Keywords: cache behaviour, network-on-chip, performance profiling, vectorization

Procedia PDF Downloads 182
2431 Effects of Evening vs. Morning Training on Motor Skill Consolidation in Morning-Oriented Elderly

Authors: Maria Korman, Carmit Gal, Ella Gabitov, Avi Karni

Abstract:

The main question addressed in this study was whether the time of day at which training is afforded is a significant factor for motor skill ('how-to', procedural knowledge) acquisition and consolidation into long-term memory in the healthy elderly population. Twenty-nine older adults (60-75 years) practiced an explicitly instructed 5-element key-press sequence by repeatedly generating the sequence ‘as fast and accurately as possible’. The contribution of three parameters to acquisition, 24 h post-training consolidation, and 1-week retention gains in motor sequence speed was assessed: (a) time of training (morning vs. evening group), (b) sleep quality (actigraphy), and (c) chronotype. All study participants were moderately morning-type according to the Morningness-Eveningness Questionnaire score. All participants had sleep patterns typical of their age, with an average sleep efficiency of ~82% and approximately 6 hours of sleep. The speed of motor sequence performance improved to a similar extent in both groups during the training session. Nevertheless, the evening group expressed small but significant overnight consolidation-phase gains, while the morning group showed only maintenance of the performance level attained at the end of training. By the 1-week retention test, both groups showed similar performance levels, with no significant gains or losses with respect to the 24 h test. Changes in the tapping patterns at 24 h and 1 week post-training were assessed based on normalized Pearson correlation coefficients using Fisher's z-transformation, in reference to the tapping pattern attained at the end of the training. Significant differences between the groups were found: the evening group showed larger changes in tapping patterns across the consolidation and retention windows. Our results show that morning-oriented older adults effectively acquired, consolidated, and maintained a new sequence of finger movements following both morning and evening practice sessions. However, time of training affected the time-course of skill evolution in terms of performance speed, as well as the reorganization of tapping patterns during the consolidation period. These results are in line with the notion that motor training preceding a sleep interval may benefit long-term memory in the elderly. Evening training should be considered an appropriate time window for motor skill learning in older adults, even in individuals with a morning chronotype.
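The Fisher z-transformation used in the tapping-pattern analysis is simply the inverse hyperbolic tangent of r; it makes correlation coefficients approximately normally distributed so group means can be compared. The r values below are hypothetical, chosen only to mirror the reported pattern of larger change in the evening group.

```python
# Fisher z-transformation of correlation coefficients: z = atanh(r).
# The r values are hypothetical stand-ins for correlations between each
# session's tapping pattern and the end-of-training pattern.
import numpy as np

r_evening = np.array([0.61, 0.55, 0.48, 0.66])   # lower r: pattern changed more
r_morning = np.array([0.82, 0.79, 0.88, 0.85])

z_evening = np.arctanh(r_evening)
z_morning = np.arctanh(r_morning)

print(f"mean z, evening: {z_evening.mean():.2f}")
print(f"mean z, morning: {z_morning.mean():.2f}")
```

A standard t-test on the z values (not the raw r values) would then assess whether the group difference is significant.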

Keywords: time-of-day, elderly, motor learning, memory consolidation, chronotype

Procedia PDF Downloads 125