Search results for: neural network architecture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6520

1270 Remote Sensing and Gis Use in Trends of Urbanization and Regional Planning

Authors: Sawan Kumar Jangid

Abstract:

The paper attempts to study various facets of urbanization and regional planning in the framework of present conditions and future needs. Urbanization is a dynamic system in which development and changes are prominent features; it implies population growth and changes in the primary, secondary and tertiary sectors of the economy. The urban population is increasing day by day due to natural population increase and migration from rural areas, and the impact is bound to be felt in urban areas in terms of infrastructure, environment, water supply and other vital resources. High-resolution satellite imagery is a potential solution for planning in an organized way and for monitoring the implementation of physical urban and regional plans. Remote Sensing data are now widely used in urban as well as regional planning and in infrastructure planning, mainly telecommunication and transport network planning, highway development, and accessibility and market area development in terms of catchment, population and built-up area density. With Remote Sensing it is possible to identify urban growth that falls outside formal planning control. Remote Sensing and GIS techniques combined facilitate decision-making by planners and allow the general public and investors to obtain relevant data for their use in minimum time. This paper sketches out an urbanization model for the future development of urban and regional planning and suggests a dynamic approach towards regional development strategy.

Keywords: development, dynamic, migration, resolution

Procedia PDF Downloads 405
1269 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method

Authors: M. T. Tsepav, Y. Adamu, M. A. Umar

Abstract:

A geoelectric survey was carried out in parts of Angwan Gwari, an outskirt of Lapai Local Government Area of Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity and protective capacity of the area, from which aquifer characterisation was made. The G41 Resistivity Meter was employed to obtain fifteen Schlumberger Vertical Electrical Sounding (VES) data sets along profiles in a square grid network. The data were processed using Interpex 1-D sounding inversion software, which gives vertical electrical sounding curves with a layered model comprising the apparent resistivities, overburden thicknesses and depths. This information was used to evaluate the longitudinal conductance and transmissivities of the layers. The results show generally low resistivities across the survey area and an average longitudinal conductance varying from 0.0237 Siemens in VES 6 to 0.1261 Siemens in VES 15, with almost the entire area giving values less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω·m² in VES 4 to 299070 Ω·m² in VES 1, and all but VES 4 and VES 14 had averages greater than 400 Ω·m². These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.
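
The layer parameters described above combine into the standard Dar-Zarrouk quantities; a minimal sketch with hypothetical layer values (not the paper's field data) is shown below.

```python
# Dar-Zarrouk parameters for a layered earth model from a VES inversion
# (all layer values are illustrative placeholders).
def dar_zarrouk(resistivities, thicknesses):
    """Return longitudinal conductance S (Siemens) and transverse
    resistance / 'transmissivity' T (ohm.m^2) of the overburden layers."""
    S = sum(h / rho for rho, h in zip(resistivities, thicknesses))
    T = sum(h * rho for rho, h in zip(resistivities, thicknesses))
    return S, T

# hypothetical three-layer overburden above the aquifer
rho = [120.0, 45.0, 300.0]   # layer resistivities (ohm.m)
h = [1.5, 6.0, 10.0]         # layer thicknesses (m)

S, T = dar_zarrouk(rho, h)
print(f"Longitudinal conductance S = {S:.4f} Siemens")
print(f"Transverse resistance T = {T:.1f} ohm.m^2")
# S < 1.0 Siemens is commonly read as weak/poor aquifer protective capacity.
```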

Keywords: geoelectric survey, corrosivity, protective capacity, transmissivity

Procedia PDF Downloads 322
1268 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications

Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui

Abstract:

Over the last years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed rapid evolution, accompanied by an increase in user requirements in terms of latency, computational power, and so on. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware is composed of multiple cores and the software is represented by models of computation, for instance the synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify C code generation, for a multi-core platform, of an application modeled in Simulink. To overcome this problem, we propose a workflow allowing an automatic transformation from the Simulink model to the SDF graph and providing an efficient schedule that optimizes the number of cores and minimizes latency. This workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To implement this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterward, we compared our results to those of this tool, using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency: if Preesm needs m processors and latency L, our workflow needs fewer processors and a latency L' < L.
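
As a rough illustration of the scheduling step, the sketch below greedily maps a small precedence (SDF-like) task graph onto a given number of cores and reports the resulting latency; the task graph and execution times are hypothetical, and this is not the scheduling algorithm of the proposed workflow or of Preesm.

```python
def list_schedule(tasks, deps, exec_time, n_cores):
    """tasks: task names; deps: dict task -> set of predecessor tasks;
    exec_time: dict task -> duration. Returns (latency, core assignment)."""
    finish = {}                      # task -> finish time
    core_free = [0.0] * n_cores      # next free instant of each core
    assignment = {}
    remaining = set(tasks)
    while remaining:
        # tasks whose predecessors have all been scheduled
        ready = [t for t in remaining if all(p in finish for p in deps.get(t, ()))]
        # pick the ready task that can start earliest
        t = min(ready, key=lambda x: max((finish[p] for p in deps.get(x, ())), default=0.0))
        earliest = max((finish[p] for p in deps.get(t, ())), default=0.0)
        core = min(range(n_cores), key=lambda c: max(core_free[c], earliest))
        start = max(core_free[core], earliest)
        finish[t] = start + exec_time[t]
        core_free[core] = finish[t]
        assignment[t] = core
        remaining.remove(t)
    return max(finish.values()), assignment

# hypothetical SDF actors extracted from a small Simulink block diagram
deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
exec_time = {"A": 2.0, "B": 3.0, "C": 4.0, "D": 1.0}
for cores in (1, 2, 3):
    latency, _ = list_schedule(exec_time.keys(), deps, exec_time, cores)
    print(f"{cores} core(s): latency = {latency}")
```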

Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow

Procedia PDF Downloads 250
1267 A System Architecture for Hand Gesture Control of Robotic Technology: A Case Study Using a Myo™ Arm Band, DJI Spark™ Drone, and a Staubli™ Robotic Manipulator

Authors: Sebastian van Delden, Matthew Anuszkiewicz, Jayse White, Scott Stolarski

Abstract:

Industrial robotic manipulators have been commonplace in the manufacturing world since the early 1960s, and unmanned aerial vehicles (drones) have only begun to realize their full potential in the service industry and the military. The omnipresence of these technologies in their respective fields will only become more potent in coming years. While these technologies have greatly evolved over the years, the typical approach to human interaction with these robots has not. In the industrial robotics realm, a manipulator is typically jogged around using a teach pendant and programmed using a networked computer or the teach pendant itself via a proprietary software development platform. Drones are typically controlled using a two-handed controller equipped with throttles, buttons, and sticks, an app that can be downloaded to one’s mobile device, or a combination of both. This application-oriented work offers a novel approach to human interaction with both unmanned aerial vehicles and industrial robotic manipulators via hand gestures and movements. Two systems have been implemented, both of which use a Myo™ armband to control either a drone (DJI Spark™) or a robotic arm (Stäubli™ TX40). The methodologies developed by this work present a mapping of armband gestures (fist, finger spread, swing hand in, swing hand out, swing arm left/up/down/right, etc.) to either drone or robot arm movements. The findings of this study present the efficacy and limitations (precision and ergonomic) of hand gesture control of two distinct types of robotic technology. All source code associated with this project will be open sourced and placed on GitHub. In conclusion, this study offers a framework that maps hand and arm gestures to drone and robot arm control. The system has been implemented using current ubiquitous technologies, and these software artifacts will be open sourced for future researchers or practitioners to use in their work.
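
A minimal sketch of the gesture-to-command mapping idea is shown below; the gesture and command names are illustrative placeholders, not the actual Myo, DJI or Stäubli APIs.

```python
# hypothetical mapping from recognized armband gestures to drone commands
GESTURE_TO_DRONE = {
    "fist":            "takeoff",
    "finger_spread":   "land",
    "swing_arm_left":  "yaw_left",
    "swing_arm_right": "yaw_right",
    "swing_arm_up":    "ascend",
    "swing_arm_down":  "descend",
}

def dispatch(gesture, command_table, send):
    """Translate a recognized gesture into a robot/drone command and send it."""
    command = command_table.get(gesture)
    if command is None:
        return False          # unrecognized gesture: ignore rather than act
    send(command)
    return True

# example: print instead of talking to real hardware
dispatch("fist", GESTURE_TO_DRONE, send=lambda cmd: print("sending:", cmd))
```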

Keywords: human robot interaction, drones, gestures, robotics

Procedia PDF Downloads 140
1266 Beneficial Effects of Curcumin against Oxidative Stress and Mitochondrial Dysfunction Induced by Trinitrobenzene Sulphonic Acid in Colon

Authors: Souad Mouzaoui, Bahia Djerdjouri

Abstract:

Oxidative stress is one of the main factors involved in the onset and chronicity of inflammatory bowel disease (IBD). In this study, we investigated the beneficial effects of a potent natural antioxidant, curcumin (Cur), on colitis and mitochondrial dysfunction in trinitrobenzene sulfonic acid (TNBS)-induced colitis in mice. Rectal instillation of the chemical irritant TNBS (30 mg kg⁻¹) induced the disruption of distal colonic architecture and a massive influx of inflammatory cells into the mucosa and submucosa layers. Under these conditions, daily administration of Cur (25 mg kg⁻¹) efficiently decreased colitis scores in the inflamed distal colon by reducing the leukocyte infiltrate, as attested by reduced myeloperoxidase (MPO) activity. Moreover, the levels of nitrite, an end product of inducible NO synthase (iNOS) activity, and malondialdehyde (MDA), a marker of lipid peroxidation, increased in a time-dependent manner in response to the TNBS challenge. Conversely, the markers of the antioxidant pool, reduced glutathione (GSH) and catalase (CAT) activity, were drastically reduced. Cur attenuated oxidative stress markers and partially restored CAT and GSH levels. Moreover, our results extended the effect of Cur to TNBS-induced colonic mitochondrial dysfunction. In fact, TNBS induced mitochondrial swelling and lipid peroxidation. These events are reflected in the opening of the mitochondrial transition pore and could be an initial step in the cascade leading to cell death. TNBS also inhibited mitochondrial respiratory activity, caused overproduction of mitochondrial superoxide anion (O2•−) and reduced the level of mitochondrial GSH. Nevertheless, Cur reduced the extent of mitochondrial oxidative stress induced by TNBS and restored colonic mitochondrial function. In conclusion, our results showed the critical role of oxidative stress in TNBS-induced colitis. They highlight the role of TNBS-induced colonic mitochondrial dysfunction as a potential source of oxidative damage. Due to its potent antioxidant properties, Cur opens a promising therapeutic approach against oxidative inflammation in IBD.

Keywords: colitis, curcumin, mitochondria, oxidative stress, TNBS

Procedia PDF Downloads 229
1265 Investigation of Optical, Film Formation and Magnetic Properties of PS Lates/MNPs Composites

Authors: Saziye Ugur

Abstract:

In this study, the optical, film formation, morphological and magnetic properties of a nanocomposite system composed of polystyrene (PS) latex polymer and core-shell magnetic nanoparticles (MNPs) are presented. Nine different mixtures were prepared by mixing the PS latex dispersion with different amounts of MNPs in the range of 0–100 wt%. PS/MNPs films were prepared from these mixtures on glass substrates by the drop-casting method. After drying at room temperature, each film sample was separately annealed at temperatures from 100 to 250 °C for 10 min. In order to monitor the film formation process, the transmittance of these composites was measured after each annealing step as a function of MNPs content. Below a critical MNPs content (30 wt%), it was found that PS percolates into the MNPs hard phase and forms an interconnected network upon annealing. The transmission results showed that above this critical value, PS latexes were no longer film forming at any of the temperatures. Besides, the PS/MNPs composite films also showed excellent magnetic properties. All composite films showed superparamagnetic behavior. The saturation magnetisation (Ms) first increased up to 0.014 emu in the range of 0–50 wt% MNPs content and then decreased to 0.010 emu with increasing MNPs content. The highest value of Ms was approximately 0.020 emu and was obtained for the film filled with 85 wt% MNPs content. These results indicated that the optical, film formation and magnetic properties of PS/MNPs composite films can be readily tuned by varying the loading content of MNPs.

Keywords: composite film, film formation, magnetic nanoparticles, PS latex, transmission

Procedia PDF Downloads 234
1264 Designing of Induction Motor Efficiency Monitoring System

Authors: Ali Mamizadeh, Ires Iskender, Saeid Aghaei

Abstract:

Energy is one of the world's high-priority issues. Energy demand is rapidly increasing with the growing population and industry, and the usable energy sources in the world will be insufficient to meet the need for energy. Therefore, the efficient and economical usage of energy sources is gaining importance. According to surveys of electricity-consuming equipment, electrical machines consume about 40% of the total electrical energy used by electrical devices, and 96% of this consumption belongs to induction motors. Induction motors are the workhorses of industry and have very large application areas in industry and urban systems, such as water pumping and distribution systems, the steel and paper industries, etc. Monitoring and control of the motors have an important effect on operating performance, driver selection and the replacement strategy management of electrical machines. A sensorless system for monitoring and calculating the efficiency of induction motors is studied in this work. The IEEE equivalent circuit is used in the design. The terminal current and voltage of the induction motor are used in this method to measure its efficiency. The motor nameplate information and the measured current and voltage are used to calculate the losses of the induction motor accurately and thereby its input and output power. In the proposed method, the efficiency of the induction motor is monitored online without disconnecting the motor from the driver and without adding any additional connection at the motor terminal box. The proposed monitoring system measures the efficiency accurately by including all losses, without using a torque meter or speed sensor. The monitoring system uses an embedded architecture and does not need to be connected to a computer to measure and log data. Conclusions regarding the efficiency, the accuracy, and the technical and economical benefits of the proposed method are presented. Experimental verification has been carried out on a 3-phase, 1.1 kW, 2-pole induction motor. The proposed method can be used for optimal control of induction motors, efficiency monitoring and motor replacement strategy.
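
A minimal sketch of a loss-summation efficiency estimate from terminal measurements is given below; the loss model is a simplified IEEE-style segregation and every numerical value is a hypothetical placeholder, not data from the paper's 1.1 kW test motor.

```python
import math

def efficiency_from_losses(v_ll, i_line, power_factor, r_stator, slip,
                           core_loss, friction_windage, stray_loss_frac=0.018):
    """Estimate induction-motor efficiency by subtracting segregated losses
    from the measured electrical input power (simplified loss segregation)."""
    p_in = math.sqrt(3) * v_ll * i_line * power_factor          # input power (W)
    p_scl = 3 * i_line**2 * r_stator                            # stator copper loss
    p_airgap = p_in - p_scl - core_loss                         # air-gap power
    p_rcl = slip * p_airgap                                     # rotor copper loss
    p_stray = stray_loss_frac * p_in                            # assumed stray-load loss
    p_out = p_airgap - p_rcl - friction_windage - p_stray
    return p_out / p_in

eta = efficiency_from_losses(v_ll=400.0, i_line=2.4, power_factor=0.82,
                             r_stator=6.1, slip=0.05,
                             core_loss=45.0, friction_windage=20.0)
print(f"estimated efficiency: {eta:.1%}")
```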

Keywords: induction motor, efficiency, power losses, monitoring, embedded design

Procedia PDF Downloads 329
1263 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University

Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat

Abstract:

Scientific researchers face challenges in the analysis of very large data sets, which are growing at a noticeable rate with today's and tomorrow's technologies. Hadoop and Spark are software frameworks developed for this purpose, and the Hadoop framework is suitable for many different hardware platforms. In this research, a Scientific Linux cluster for Big Data analysis (SLBD) is presented. SLBD runs open source software with large computational capacity on a high performance cluster infrastructure. SLBD is composed of one cluster containing identical, commodity-grade computers interconnected via a small LAN. SLBD consists of a fast switch and Gigabit-Ethernet cards which connect four nodes. Cloudera Manager is used to configure and manage an Apache Hadoop stack. Hadoop is a framework that allows storing and processing big data across the cluster by using the MapReduce algorithm. The MapReduce algorithm divides a task into smaller tasks which are assigned to the network nodes; the algorithm then collects the results and forms the final result dataset. The SLBD clustering system allows fast and efficient processing of the large amounts of data resulting from different applications. SLBD also provides high performance, high throughput, high availability, expandability and cluster scalability.
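
The MapReduce pattern described above is illustrated by the small pure-Python word count sketch below; a real SLBD job would be submitted to Hadoop and run across the cluster nodes.

```python
from collections import defaultdict

def map_phase(record):
    # emit (key, value) pairs: one (word, 1) per word
    return [(word.lower(), 1) for word in record.split()]

def shuffle(mapped_pairs):
    # group values by key, as Hadoop does between the map and reduce phases
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return key, sum(values)

records = ["big data needs big clusters", "hadoop processes big data"]
mapped = [pair for rec in records for pair in map_phase(rec)]
result = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(result)   # e.g. {'big': 3, 'data': 2, ...}
```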

Keywords: big data platforms, Cloudera Manager, Hadoop, MapReduce

Procedia PDF Downloads 342
1262 Mobility-Aware Relay Selection in Two Hop Unmanned Aerial Vehicles Network

Authors: Tayyaba Hussain, Sobia Jangsher, Saqib Ali, Saqib Ejaz

Abstract:

Unmanned aerial vehicles (UAVs) have gained great popularity due to their remote operation, ease of deployment and high maneuverability in different applications like real-time surveillance, image capturing, weather and atmospheric studies, disaster site monitoring and mapping. These applications can involve real-time communication with the ground station. However, altitude and mobility pose a few challenges for the communication. UAVs at high altitude usually require more transmit power. One possible solution is the use of multiple hops (UAVs acting as relays) and the exploitation of the mobility patterns of the UAVs. In this paper, we study relay selection (UAVs acting as relays) for reliable transmission to a destination UAV. We exploit the mobility information of the UAVs to propose a Mobility-Aware Relay Selection (MARS) algorithm with the objective of improving data rates. The results are compared with a non-mobility-aware relay selection scheme and with optimal values. Numerical results show that our proposed MARS algorithm gives 6% better achievable data rates for mobile UAVs compared with the non-mobility-aware relay selection scheme. On average, a decrease of 20.2% in data rate is observed with MARS compared with the SDP solver in YALMIP.
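
The sketch below illustrates the basic idea of mobility-aware relay selection: predict each candidate relay's position from its velocity and pick the relay that maximizes the bottleneck rate of the two-hop link. The geometry, link model and parameter values are hypothetical and do not reproduce the paper's MARS formulation.

```python
import math

def rate(p_tx, d, bandwidth=1e6, noise=1e-13, path_loss_exp=2.5):
    """Shannon rate (bit/s) of a link of length d under a power-law path loss."""
    snr = p_tx * d ** (-path_loss_exp) / noise
    return bandwidth * math.log2(1 + snr)

def predict(pos, vel, t):
    # straight-line mobility prediction over horizon t
    return tuple(p + v * t for p, v in zip(pos, vel))

def select_relay(src, dst, relays, t, p_tx=0.5):
    """relays: dict name -> (position, velocity). The two-hop rate is limited
    by the weaker hop, so maximize min(source->relay, relay->destination)."""
    best = None
    for name, (pos, vel) in relays.items():
        r_pos = predict(pos, vel, t)
        r = min(rate(p_tx, math.dist(src, r_pos)), rate(p_tx, math.dist(r_pos, dst)))
        if best is None or r > best[1]:
            best = (name, r)
    return best

relays = {
    "uav_A": ((200.0, 50.0),  (5.0, 0.0)),    # drifting toward the destination
    "uav_B": ((300.0, 400.0), (0.0, -3.0)),
}
print(select_relay(src=(0.0, 0.0), dst=(600.0, 0.0), relays=relays, t=10.0))
```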

Keywords: mobility aware, relay selection, time division multiple access, unmanned aerial vehicle

Procedia PDF Downloads 224
1261 Virtual and Visual Reconstructions in Museum Expositions

Authors: Ekaterina Razuvalova, Konstantin Rudenko

Abstract:

In this article, the most successful international examples of visual and virtual reconstructions of historical and cultural objects, based on information and communication technologies, are presented. 3D reconstructions can demonstrate outward appearance and visualize different hypotheses connected to the represented object. Virtual reality can give us any daytime and season, any century and environment. We can see how different people from different countries and different eras lived; we can get different kinds of information about any object; we can see historical complexes, damaged or vanished, in their real city environment. These innovations confirm that 3D reconstruction is important in museum development. Considering the most interesting examples of visual and virtual reconstructions, we can notice that a visual reconstruction is a 3D image of different objects, historical complexes, buildings and phenomena. They are constant, and we can see them only as momentary objects. A virtual reconstruction, by contrast, is an environment with its own time, rules and phenomena. These reconstructions are continuous; seasons, daytime and natural conditions can change there. They can demonstrate the possibilities of a virtual world. In conclusion, new technologies give us opportunities to expand the boundaries of museum space, improve the abilities of museum expositions, and create the emotional atmosphere of game immersion, which can interest the visitor. The usage of network sources allows increasing the number of visitors, and virtual reconstruction opportunities show the creative side of the museum business.

Keywords: computer technologies, historical reconstruction, museums, museum expositions, virtual reconstruction

Procedia PDF Downloads 312
1260 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected. Also, the subset of complete data might be strongly distorted. In addition, the reason that data is missing might itself also contain information, which is however ignored with that approach. An interesting issue is, therefore, whether for economic analyses such as the one at hand there is an added value in using the whole data set with the imputed missing values compared to using the usually small percentage of complete data (baseline). Also, it is interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data and, thereby, including the information of all data into the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performances of the algorithms are measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates which are based on imputed data sets do not differ significantly from each other; however, the demand estimate that is derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though it involves additional procedures such as data imputation.
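
A minimal sketch of the evaluation loop described above follows: hide some entries at random, impute them with an unsupervised method (here k-nearest neighbours), and compare the imputed values with the held-out truth. The data are synthetic placeholders, not the Swiss search-subscription data.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
n = 500
rooms = rng.integers(1, 6, size=n).astype(float)
area = rooms * 30 + rng.normal(0, 10, size=n)            # correlated with rooms
price = 2000 + 8 * area + rng.normal(0, 300, size=n)      # hypothetical willingness to pay
X = np.column_stack([rooms, area, price])

# randomly hide 20% of the price entries
mask = rng.random(n) < 0.2
X_missing = X.copy()
X_missing[mask, 2] = np.nan

imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)
rmse = np.sqrt(np.mean((imputed[mask, 2] - X[mask, 2]) ** 2))
print(f"RMSE of imputed prices: {rmse:.1f}")
```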

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 267
1259 Saudi Human Awareness Needs: A Survey on How Human Errors and Mistakes Lead to Leaking Confidential Data, with Proposed Solutions in Saudi Arabia

Authors: Amal Hussain Alkhaiwani, Ghadah Abdullah Almalki

Abstract:

Recently, human error has increasingly become a major factor in security breaches that affect confidential data, and most cyber data breaches are caused by human errors. With one individual mistake, the attacker can gain access to the entire network and bypass the implemented access controls without any immediate detection. Unaware employees are vulnerable to social engineering cyber-attacks. Providing security awareness to people is part of the company protection process; cyber risks cannot be reduced just by implementing technology, and human awareness of security will significantly reduce the risks, which encourages changes in staff cyber-awareness. In this paper, we focus on human awareness and the need to maintain the required level of security education; we review human errors and introduce a proposed solution to keep breaches from occurring again. Recently, Saudi Arabia has faced many attacks using different methods of social engineering. As Saudi Arabia has become a target for many countries and individuals, a defense mechanism is needed that begins with awareness, in order to keep our privacy and protect confidential data against possible intended attacks.

Keywords: cybersecurity, human aspects, human errors, human mistakes, security awareness, Saudi Arabia, security program, security education, social engineering

Procedia PDF Downloads 135
1258 A Comparative Evaluation of the SIR and SEIZ Epidemiological Models to Describe the Diffusion Characteristics of COVID-19 Polarizing Viewpoints Online

Authors: Maryam Maleki, Esther Mead, Mohammad Arani, Nitin Agarwal

Abstract:

This study is conducted to examine how opposing viewpoints related to COVID-19 were diffused on Twitter. To accomplish this, six datasets using two epidemiological models, SIR (Susceptible, Infected, Recovered) and SEIZ (Susceptible, Exposed, Infected, Skeptics), were analyzed. The six datasets were chosen because they represent opposing viewpoints on the COVID-19 pandemic. Three of the datasets contain anti-subject hashtags, while the other three contain pro-subject hashtags. The time frame for all datasets is three years, starting from January 2020 to December 2022. The findings revealed that while both models were effective in evaluating the propagation trends of these polarizing viewpoints, the SEIZ model was more accurate with a relatively lower error rate (6.7%) compared to the SIR model (17.3%). Additionally, the relative error for both models was lower for anti-subject hashtags compared to pro-subject hashtags. By leveraging epidemiological models, insights into the propagation trends of polarizing viewpoints on Twitter were gained. This study paves the way for the development of methods to prevent the spread of ideas that lack scientific evidence while promoting the dissemination of scientifically backed ideas.
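
For reference, the sketch below integrates one common formulation of the SEIZ compartments (Susceptible, Exposed, Infected, Skeptic) used for modelling hashtag adoption; the parameter values are hypothetical, not the values fitted to the paper's Twitter data sets.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seiz(t, y, N, beta, b, rho, eps, p, l):
    # S contacts adopters (rate beta) or skeptics (rate b); exposed users adopt
    # via further contact (rho) or on their own (eps); p and l split the flows.
    S, E, I, Z = y
    dS = -beta * S * I / N - b * S * Z / N
    dE = (1 - p) * beta * S * I / N + (1 - l) * b * S * Z / N - rho * E * I / N - eps * E
    dI = p * beta * S * I / N + rho * E * I / N + eps * E
    dZ = l * b * S * Z / N
    return [dS, dE, dI, dZ]

N = 100_000                      # users potentially reachable by the hashtag
y0 = [N - 10, 0, 10, 0]          # a handful of initial adopters
params = dict(N=N, beta=0.3, b=0.2, rho=0.1, eps=0.05, p=0.6, l=0.4)

sol = solve_ivp(seiz, (0, 120), y0, args=tuple(params.values()),
                t_eval=np.linspace(0, 120, 13))
print("infected (adopters) over time:", np.round(sol.y[2]).astype(int))
# The SIR baseline is the special case with no Exposed/Skeptic compartments.
```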

Keywords: mathematical modeling, epidemiological model, SEIZ model, SIR model, COVID-19, Twitter, social network analysis, social contagion

Procedia PDF Downloads 43
1257 Utilization of Traditional Medicine for Treatment of Selected Illnesses among Crop-Farming Households in Edo State, Nigeria

Authors: Adegoke A. Adeyelu, Adeola T. Adeyelu, S. D. Y. Alfred, O. O. Fasina

Abstract:

This study examines the use of traditional medicines for the treatment of selected illnesses among crop-farming households in Edo State, Nigeria. A sample of ninety (90) households was randomly selected for the study. Data were collected with a structured questionnaire alongside focus group discussions (FGD). Results show that the mean age was 50 years and that the majority (76.7%) of the sampled farmers were below 60 years old. The majority (80.0%) of the farmers were married, and about 92.2% had formal education. The study also shows that the majority of the respondents (76.7%) had a household size of between 1 and 10 persons, and about 55.6% had spent 11 years or more in crop farming. Malaria (8th), waist pains (7th), farm injuries (6th), cough (5th), acute headache (4th), skin infection (3rd), typhoid (2nd) and tuberculosis (1st) were the treated illnesses, in the rank order indicated. Eighty percent of respondents had spent N10,000.00 ($27) or less on the treatment of illnesses, 8.9% had spent N10,000.00–N20,000.00 ($27–$55), 4.4% had spent between N20,100.00 and N30,000.00 ($55–$83), while 6.7% had spent more than N30,100.00 ($83) on the treatment of illnesses in the one (1) year prior to the study. Age, years of farming, farm size, household size, level of income, cost of treatment, level of education, social network, and culture are some of the statistically significant factors influencing the utilization of traditional medicine. Farmers should be educated on methods of preventing illnesses, since prevention is far cheaper than cure.

Keywords: crop farming-households, selected illnesses, traditional medicines, Edo State

Procedia PDF Downloads 178
1256 Treatment of Greywater at Household by Using Ceramic Tablet Membranes

Authors: Abdelkader T. Ahmed

Abstract:

Greywater is any wastewater draining from a household, including from kitchen sinks and bathroom tubs, except toilet wastes. Although this used water may contain grease, food particles, hair, and any number of other impurities, it may still be suitable for reuse after treatment. Reusing greywater serves two purposes: reducing the amount of freshwater needed to supply a household, and reducing the amount of wastewater entering sewer systems. This study aims to investigate and design a simple and cheap unit to treat greywater at the household level using ceramic membranes and to reuse it for toilet flushing. The study includes an experimental program for manufacturing several tablet ceramic membranes from clay and sawdust with three different mixtures. The productivity and efficiency of these ceramic membranes were investigated by chemical and physical tests on the greywater before and after filtration through the membranes. A treatment unit based on these ceramic membranes was then designed from the results of the laboratory tests. Results showed that increasing the sawdust fraction of the mixture increased the flow rate and productivity of treated water but at the same time decreased the water quality. The efficiency of the new ceramic membrane reached 95%. The treatment unit saves 0.3 m³/day of water for toilet flushing that no longer needs to be drawn from the freshwater supply network.

Keywords: ceramic membranes, filtration, greywater, wastewater treatment

Procedia PDF Downloads 316
1255 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies

Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan

Abstract:

The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, specifically if the process is done by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, as well as multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, can be considered one of the important challenges in developing countries for tourists trying to make the best decision in terms of time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist-guide service in which tourists can search places of interest based on their requested time of travel. To design the service, a three-tier architecture, comprising data, logical processing, and presentation tiers, has been utilized. For implementing the service, open-source software programs, client- and server-side programming languages (such as OpenLayers2, AJAX, and PHP), GeoServer as a map server, and the Web Feature Service (WFS) standard have been used. The result is two distinct browser-based services, one for submitting spatial, descriptive, and multimedia volunteer data and another one for tourists and local officials. The local official confirms the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine has been designed to enable tourists to find a tourist place based on province, city, and location at a specific time of interest. Implementing the tourist-guide service with this methodology means that current tourists participate in a free data collection and sharing process for future tourists, data are shared and accessible in real time for all, a blind selection of travel destination is avoided and, significantly, the cost of providing such services decreases.
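
A minimal sketch of how a client could query volunteer-submitted points from GeoServer through the standard WFS interface is shown below; the endpoint URL, layer name and attribute filter are hypothetical placeholders, not the project's actual service.

```python
import requests

WFS_URL = "http://example.org/geoserver/wfs"     # hypothetical GeoServer endpoint

params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "tourism:volunteer_places",       # hypothetical layer name
    "outputFormat": "application/json",
    # spatiotemporal filter on hypothetical attributes: province and best month
    "CQL_FILTER": "province='Tehran' AND best_month='April'",
}

response = requests.get(WFS_URL, params=params, timeout=10)
response.raise_for_status()
for feature in response.json()["features"]:
    props = feature["properties"]
    print(props.get("name"), feature["geometry"]["coordinates"])
```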

Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping

Procedia PDF Downloads 74
1254 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study

Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple

Abstract:

There is a dramatic surge in the adoption of machine learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, malware detection, to name a few. With the application of learning methods in such diverse domains, artificial intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been on developing underpinning ML algorithms that can improve accuracy, while factors such as resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and three defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt machine learning techniques in security-critical areas such as the nuclear industry without rigorous testing since they may be vulnerable to adversarial attacks. While common defence methods can effectively defend against different attacks, none of the three considered can provide protection against all five adversarial attacks analysed.
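
As a minimal, self-contained illustration of the kind of evasion attack studied, the sketch below applies a one-step fast gradient sign perturbation to a stand-in linear classifier trained on synthetic data; it does not reproduce the paper's crack-detection model, data, or the specific five attacks and three defences analysed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(int)            # synthetic "crack / no crack" labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def fgsm(x, label, eps):
    """One-step FGSM: move the input along the sign of the loss gradient."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability
    grad_x = (p - label) * w                 # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

x0, y0 = X[0], y[0]
x_adv = fgsm(x0, y0, eps=0.5)                # larger eps makes a flip more likely
print("clean prediction:      ", clf.predict(x0.reshape(1, -1))[0], "true:", y0)
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```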

Keywords: adversarial machine learning, attacks, defences, nuclear industry, crack detection

Procedia PDF Downloads 138
1253 Hermitical Landscapes: The Congregation of Saint Paul of Serra De Ossa

Authors: Rolando Volzone

Abstract:

The Congregation of Saint Paul of Serra de Ossa (Ossa Mountain) was founded in 1482, originating from the eremitic movement of the homens da pobre vida (poor life men), which has been documented since 1366. The community of hermits expanded up to the first half of the 15th century, mostly in southern Portugal in the Alentejo region. In 1578, following a process of institutionalization led by the Church, an autonomous congregation was set up, affiliated with the Hungarian Order of Saint Paul the First Hermit, and it lasted until 1834, when the decree of dissolution of the religious orders disbanded all the convents and monasteries in Portugal. The architectural evidence that has reached our days as a legacy of the hermitical movement in Serra de Ossa, although studied and analysed from a historical point of view, is still little known with respect to the architectural characteristics of its physical implantation and its relationship with natural systems. This research intends to expose the appropriation process of the locus eremus as a starting point for the interpretation of this landscape, evidencing the close relationship between the religious experience and the physical space chosen to reach the perfection of the soul. The locus eremus is thus determined not only by practical aspects such as the absolute and relative location, orography, the existence of water resources, or the King's favouring of the religious and settlement action of the hermits, but also by spiritual aspects related to the symbolism of the physical elements present and the solitary walk of these men. These aspects, combined with the built architectural elements and other human actions, may be fertile ground for the definition of a hypothetical hermitical landscape based on the sufficiently distinctive characteristics that sustain it. The landscape built by these hermits constitutes a cultural and material heritage, and its preservation is of utmost importance. They deeply understood this place and took advantage of its natural resources, manipulating them in an ecologically and economically sustainable way, respecting the place, not overcoming its own genius loci but becoming part of it.

Keywords: architecture, congregation of Saint Paul of Serra de Ossa, hermitical landscape, locus eremus

Procedia PDF Downloads 213
1252 Inferring Thimlich Ohinga Gender Identity Through Ethnoarchaeological Analysis

Authors: David Maina Muthegethi

Abstract:

The Victoria Basin is associated with being a gateway for migration to the southern part of Africa. Different communities migrated through the region, including the Bantu and Nilotic communities that occupy present-day Kenya and Tanzania. A distinct culture of dry-stone technology emerged around the 15th century of the current era, a period associated with the peopling of the western Kenya region. One of the biggest dry-stone wall enclosures is the Thimlich Ohinga archaeological site. The site was constructed around the fourteenth century of the current era. The architectural design consisted of oval-shaped stone structures that were around 4 meters and 2 meters in length and width, respectively. The main subsistence strategies of the community were crop farming, pastoralism, fishing, hunting and gathering. This paper attempts to examine the gender dynamics of Thimlich Ohinga society. To that end, attempts are made to infer gender roles as manifested in the archaeological record. The study therefore entails the examination of material evidence excavated from the site. Also, an ethnoarchaeological study of the contemporary Luo community was undertaken in order to make inferences and analogies concerning the gender roles of Thimlich Ohinga society. Overall, the study involved the examination of cultural materials excavated from Thimlich Ohinga, an extensive survey of the site, and ethnography of the Luo community. In total, an extensive survey and interviews of 20 households were undertaken in South Kanyamkango ward, Migori County, in western Kenya. The key findings point out that Thimlich Ohinga gender identities were expressed in material form through architecture, usage of spaces, subsistence strategies, dietary patterns and household organization. Also, gender as a social identity was dynamic and responsive to the diversification of subsistence strategies and the intensification of regional trade, as documented in the contemporary Luo community. The paper reiterates the importance of ethnoarchaeological methods in the reconstruction of past social organization as manifested in the material record.

Keywords: ethnoarchaeological, gender, subsistence patterns, Thimlich Ohinga

Procedia PDF Downloads 62
1251 Achieving High Renewable Energy Penetration in Western Australia Using Data Digitisation and Machine Learning

Authors: A. D. Tayal

Abstract:

The energy industry is undergoing significant disruption. This research outlines that, whilst challenging, this disruption is also an emerging opportunity for electricity utilities. One such opportunity is leveraging the developments in data analytics and machine learning. As the uptake of renewable energy technologies and complementary control systems increases, electricity grids will likely transform towards dense microgrids with high penetration of renewable generation sources, rich in network and customer data, and linked through intelligent, wireless communications. Data digitisation and analytics have already impacted numerous industries, and their influence on the energy sector is growing, as computational capabilities increase to manage big data, and as machines develop algorithms to solve the energy challenges of the future. The objective of this paper is to address how far the uptake of renewable technologies can go given the constraints of existing grid infrastructure, and to provide a qualitative assessment of how higher levels of renewable energy penetration can be facilitated by incorporating even broader technological advances in the fields of data analytics and machine learning. Western Australia is used as a contextualised case study, given its abundant and diverse renewable resources (solar, wind, biomass, and wave) and isolated networks, making a high penetration of renewables a feasible target for policy makers over the coming decades.

Keywords: data, innovation, renewable, solar

Procedia PDF Downloads 347
1250 Distributed Control Strategy for Dispersed Energy Storage Units in the DC Microgrid Based on Discrete Consensus

Authors: Hanqing Yang, Xiang Meng, Qi Li, Weirong Chen

Abstract:

SOC (state of charge)-based droop control has limitations in load power sharing among different energy storage units due to line impedance. In this paper, a distributed control strategy for dispersed energy storage units in a DC microgrid based on discrete consensus is proposed. Firstly, a sparse information communication network is built so that local controllers can communicate with their neighbors using voltage, current and SOC information. The average grid voltage can be estimated to compensate the voltage offset introduced by droop control, and a target virtual resistance fulfilling the above requirement can be dynamically calculated to distribute load power according to the SOC of the energy storage units. Then, the stability of the whole system and the influence of communication delay are analyzed. It can be concluded that this control strategy improves robustness and flexibility because it has no central controller. Finally, a model of a DC microgrid with dispersed energy storage units and loads is built, the discrete distributed algorithm is implemented, and a communication protocol is developed. Co-simulation between Matlab/Simulink and JADE (Java Agent Development Framework) has verified the effectiveness of the proposed control strategy.
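
The core idea of the discrete consensus step, namely that every unit converges to the network-wide average using only neighbor-to-neighbor communication, is illustrated by the sketch below; the four-node topology and voltage values are hypothetical, not the paper's microgrid model.

```python
import numpy as np

# adjacency of a sparse 4-node communication graph (line topology)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def consensus_average(x0, A, step=0.3, iters=60):
    """Each node repeatedly moves toward its neighbours' values; all local
    estimates converge to the global average of the initial measurements."""
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * (L @ x)              # step must stay below 2 / lambda_max(L)
    return x

local_voltages = np.array([47.8, 48.1, 48.4, 47.5])   # measured bus voltages (V)
estimates = consensus_average(local_voltages, A)
print("local estimates of the average voltage:", np.round(estimates, 3))
print("true average:", local_voltages.mean())
```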

Keywords: dispersed energy storage units, discrete consensus algorithm, state of charge, communication delay

Procedia PDF Downloads 256
1249 Ecosystems: An Analysis of Generation Z News Consumption, Its Impact on Evolving Concepts and Applications in Journalism

Authors: Bethany Wood

Abstract:

The world pandemic led to a change in the way social media was used by audiences, with young people spending more hours on these platforms due to lockdown. Reports by Ofcom have demonstrated that the internet is the second most popular platform for accessing news after television in the UK, with social media and the internet ranked as the most popular platforms for accessing news among those aged between 16 and 24. These statistics are unsurprising considering that, at the time of writing, 98 percent of Generation Z (Gen Z) owned a smartphone, with the consequent ease and accessibility of social media. Technology is constantly developing and, with this, its importance is becoming more prevalent with each generation: the Baby Boomers (1946-1964) consider it something useful, whereas millennials (1981-1997) believe it a necessity for day-to-day living. Gen Z, otherwise known as digital natives, have grown up with this technology at their fingertips, and social media is a norm. It helps form their identity and their affiliations and opens gateways for them to engage with news in a new way. It is a common misconception that Gen Z do not consume news; they are simply doing so in a different way to their predecessors. Using a sample of 800 18-20 year olds whilst utilising Generational Theory, Actor Network Theory and the Social Shaping of Technology, this research provides a critical analysis of how Gen Z's news consumption and engagement habits are developing along with technology to sculpt the future format of news and its distribution. From that perspective, allied with the empirical approach, it is possible to provide research-oriented advice for the industry and even help to redefine traditional concepts of journalism.

Keywords: journalism, generation z, digital, social media

Procedia PDF Downloads 59
1248 Lessons Learned from Covid19 - Related ERT in Universities

Authors: Sean Gay, Cristina Tat

Abstract:

This presentation will detail how a university in Western Japan implemented its English for Academic Purposes (EAP) program during the onset of CoViD-19 in the spring semester of 2020. In the spring semester of 2020, after a 2-week delay, all courses within the School of Policy Studies EAP Program at Kwansei Gakuin University were offered in an online asynchronous format. The rationale for this decision was not to disadvantage students who might not have access to the devices necessary for taking part in synchronous online lessons. The course coordinators were tasked with consolidating the materials originally designed for face-to-face 14-week courses for a 12-week asynchronous online semester and with uploading the modified course materials to Luna, the university's network, which is a modified version of Blackboard. Based on research to determine the social and academic impacts of this CoViD-19 ERT approach on the students who took part in this EAP program, this presentation explains how future curriculum design and implementation can be managed in a post-CoViD world. A wide variety of lessons were salient. The role of the classroom as a social institution was very prominent; however, awareness of cognitive burdens and strategies to mitigate those burdens may be more valuable for teachers. The lessons learned during this period of ERT can help teachers moving forward.

Keywords: asynchronous online learning, emergency remote teaching (ERT), online curriculum design, synchronous online learning

Procedia PDF Downloads 188
1247 Phishing Detection: Comparison between Uniform Resource Locator and Content-Based Detection

Authors: Nuur Ezaini Akmar Ismail, Norbazilah Rahim, Norul Huda Md Rasdi, Maslina Daud

Abstract:

Web applications are the most targeted by attackers because they are accessible to end users. This has become more advantageous to the attacker since not all end users are aware of what kind of sensitive data they have already leaked through the Internet, especially via social networks, for the sake of 'sharing'. The attacker can use this information, such as personal details, favourite artists, favourite actors or actresses, music, politics, and medical records, to customize a phishing attack and thus trick the user into clicking on malware-laced attachments. Phishing is one of the most popular social engineering attacks against web applications. There are several methods to detect phishing websites, such as blacklist/whitelist-based detection, heuristic-based detection, and visual similarity-based detection. Based on papers reviewed from the past few years, this paper presents a comparison between the heuristic-based technique, which uses features of the uniform resource locator (URL), and visual similarity-based detection techniques, which compare the content of a suspected phishing page with the legitimate one in order to detect new phishing sites. The comparison focuses on three indicators: false positives and negatives, accuracy of the method, and time consumed to detect a phishing website.
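
The sketch below illustrates the URL-heuristic side of the comparison with a few lexical features that are commonly used to flag suspicious URLs; the feature set and threshold are illustrative only and are not taken from any specific reviewed paper.

```python
import re
from urllib.parse import urlparse

def url_features(url):
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "length": len(url),
        "has_ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_at_symbol": "@" in url,
        "uses_https": parsed.scheme == "https",
        "num_hyphens_in_host": host.count("-"),
    }

def looks_suspicious(url):
    f = url_features(url)
    score = (f["has_ip_host"] + f["has_at_symbol"] + (f["length"] > 75)
             + (f["num_subdomains"] > 3) + (not f["uses_https"]))
    return score >= 2   # illustrative threshold

for u in ["https://example.com/login",
          "http://192.168.10.5/secure-update/account@verify"]:
    print(u, "->", "suspicious" if looks_suspicious(u) else "ok")
```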

Keywords: heuristic-based technique, phishing detection, social engineering and visual similarity-based technique

Procedia PDF Downloads 160
1246 Formulating a Definition of Hate Speech: From Divergence to Convergence

Authors: Avitus A. Agbor

Abstract:

Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these belong to specific identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, atheism, and other brutal forms of bigotry. Common to all these is an invisible but potent force that drives all of them: hatred. Such hatred is usually fueled by a profound degree of intolerance (to diversity) and the zeal to impose on others their beliefs and practices, which they consider to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invective and hate speech which, to a greater extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and without legal scholarship, the notion of "hate speech" remains a conundrum: a phrase that is more easily explained through experiences than captured in a watertight definition covering its entire essence and nature. The problem is further compounded by a few factors: first, within the international human rights framework, the notion of hate speech is not used. In limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for the subsequent developments that took place in the European Union, in which the notion has been carefully delineated and a much clearer picture of what constitutes hate speech is now provided. The legal architecture in domestic legal systems clearly shows differences in approaches and regulation, making matters more difficult. In short, what may be hate speech in one legal system may very well be acceptable legal speech in another. Lastly, the cornucopia of academic voices on the issue of hate speech exudes the divergence thereon. Yet, in the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship and how and why such a formulation is of critical importance in the prohibition and prosecution of hate speech.

Keywords: hate speech, international human rights law, international criminal law, freedom of expression

Procedia PDF Downloads 50
1245 Competitiveness of a Shared Autonomous Electrical Vehicle Fleet Compared to Traditional Means of Transport: A Case Study for Transportation Network Companies

Authors: Maximilian Richter

Abstract:

Implementing shared autonomous electric vehicles (SAEVs) has many advantages. The main advantages are achieved when SAEVs are offered as on-demand services by a fleet operator. However, autonomous mobility on demand (AMoD) will be deployed nationwide only if fleet operation is economically profitable for the operator. This paper proposes a microscopic approach to modeling two implementation scenarios of an AMoD fleet. The city of Zurich is used as a case study, with the results and findings being generalizable to other similar European and North American cities. The data are based on the traffic model of the canton of Zurich (Gesamtverkehrsmodell des Kantons Zürich, GVM-ZH). To determine financial profitability, demand is based on the simulation results and combined with an analysis of the cost per kilometer of a SAEV. The results demonstrate that, depending on the scenario, journeys can be offered profitably to customers for CHF 0.3 up to CHF 0.4 per kilometer. While larger fleets allow for lower price levels and increased profits in the long term, smaller fleets exhibit elevated efficiency levels and profit opportunities per day. The paper concludes with recommendations for how fleet operators can prepare themselves to maximize profit in the autonomous future.
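
The cost-per-kilometer analysis mentioned above can be sketched as a simple cost accounting per revenue kilometer; every number in the sketch below is a placeholder assumption, not a figure from the Zurich study.

```python
def cost_per_km(vehicle_price, lifetime_km, energy_kwh_per_km, electricity_price,
                maintenance_per_km, insurance_per_year, cleaning_per_year,
                annual_km, overhead_share=0.15, empty_km_share=0.20):
    """Full cost per revenue kilometre, spreading fixed costs over annual mileage
    and inflating paid distance by empty repositioning kilometres."""
    depreciation = vehicle_price / lifetime_km
    energy = energy_kwh_per_km * electricity_price
    fixed = (insurance_per_year + cleaning_per_year) / annual_km
    base = depreciation + energy + maintenance_per_km + fixed
    base *= (1 + overhead_share)               # fleet management / platform overhead
    return base / (1 - empty_km_share)         # spread over revenue km only

c = cost_per_km(vehicle_price=45_000, lifetime_km=400_000,
                energy_kwh_per_km=0.18, electricity_price=0.20,
                maintenance_per_km=0.04, insurance_per_year=1_200,
                cleaning_per_year=2_500, annual_km=80_000)
print(f"cost per revenue km: CHF {c:.2f}")
```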

Keywords: autonomous vehicle, mobility on demand, traffic simulation, fleet provider

Procedia PDF Downloads 109
1244 Thermal Comfort in Office Rooms in a Historic Building with Modernized Heating, Ventilation and Air Conditioning Systems

Authors: Hossein Bakhtiari, Mathias Cehlin, Jan Akander

Abstract:

Envelopes with low thermal performance are a common characteristic of many European historic buildings, which leads to higher energy demand for heating and cooling as well as insufficient thermal comfort for the occupants. This paper presents the results of a study on thermal comfort in the City Hall (Rådhuset) in Gävle, Sweden. This historic building is currently used as an office building. It is equipped with two relatively modern mechanical heat recovery ventilation systems with displacement ventilation supply devices in the offices. The district heating network heats the building via pre-heated supply air and radiators. Summer cooling comes from an electric heat pump that rejects heat into the exhaust ventilation air. A building management system controls the HVAC (heating, ventilation and air conditioning) equipment. The methodology is based on on-site measurements, data logging in the management system, and an evaluation of the occupants' perception of the indoor environment during a summer and a winter period using a standardized questionnaire. The main aim of the study is to investigate whether it is enough to have modernized HVAC systems to obtain adequate thermal comfort in a historic building with poor envelope performance used as an office building in Nordic climate conditions.

Keywords: historic buildings, on-site measurements, standardized questionnaire, thermal comfort

Procedia PDF Downloads 356
1243 The Effect of Molecular Weight on the Cross-Linking of Two Different Molecular Weight LLDPE Samples

Authors: Ashkan Forootan, Reza Rashedi

Abstract:

Polyethylene has wide usage areas such as blow molding, pipe, film, and cable insulation. However, regardless of its growing range of applications, it has some constraints, such as a limited operating temperature of 70 °C. A polyethylene thermosetting procedure, in which the molecules are knotted and a 3D molecular network is formed, has been developed to overcome the above problem and to raise the applicable temperature of the polymer. This paper reports the cross-linking of two different molecular weight grades of LLDPE by adding 0.5, 1, and 2% of DCP (dicumyl peroxide). DCP was chosen for its prevalence among the various cross-linking agents. Structural parameters such as molecular weight, melt flow index, comonomer content, number of branches, etc. were obtained through the relevant tests, such as gel permeation chromatography and Fourier transform infrared spectrometry. After calculating the gel content percentage, the properties of the pure and cross-linked samples were compared by thermal and mechanical analysis with DMTA and FTIR, and the effects of cross-linking, such as on the viscous and elastic moduli, were discussed using various structural parameters such as MFI, molecular weight, short chain branches, etc. Studies showed that the cross-linked polymer, unlike the pure one, remained in a solid state with thermo-mechanical properties in the range of 110 to 120 °C, and this helped overcome the problem of using polyethylene at temperatures near the melting point.

Keywords: LLDPE, cross-link, structural parameters, DCP, DMTA, GPC

Procedia PDF Downloads 290
1242 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories. To accomplish this, it uses advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. This enables doctors to get involved early to prevent problems or improve outcomes for patients. It also assists in early disease detection and customized treatment planning for every person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies. In this way, treatments can be more effective and have fewer negative consequences. Moreover, besides helping patients, it improves the efficiency of hospitals, helping them determine, for example, the number of beds or doctors they require for the number of patients they expect. In this project, models like logistic regression, random forests, and neural networks are used for predicting diseases and analyzing medical images. Patients were grouped by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Finally, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently. It ultimately comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
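
As a minimal illustration of the predictive-modelling step mentioned above, the sketch below compares two of the named models on synthetic tabular data standing in for patient features; no real patient records or project results are reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# synthetic stand-in for demographic / lab features and a binary disease label
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=200,
                                                             random_state=0))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```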

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 53
1241 The Ontological Memory in Bergson as a Conceptual Tool for the Analysis of the Digital Conjuncture

Authors: Douglas Rossi Ramos

Abstract:

The current digital conjuncture, called by some authors the 'Internet of Things' (IoT), 'Web 2.0' or even 'Web 3.0', consists of a network that encompasses any communication of objects and entities, such as data, information, technologies, and people. At this juncture, especially characterized by an "object socialization," communication can no longer be represented as a simple informational flow of messages from a sender, crossing a channel or medium, and reaching a receiver. The idea of communication must therefore be thought of more broadly, in a way that makes it possible to analyze the communicative process through interactions between humans and nonhumans. To think about this complexity, a communicative process that encompasses both humans and other communicating beings or entities (objects and things), it is necessary to constitute a new epistemology of communication and to rethink concepts and notions commonly attributed to humans, such as 'memory'. This research aims to contribute to this epistemological constitution through a discussion of the notion of memory according to the complex ontology of Henri Bergson. Among the results (the notion of memory in Bergson presents itself as a conceptual tool for the analysis of posthumanism and the anthropomorphic conjuncture of the new digital advent), there emerged the need to think about an ontological memory, analyzed as a being in itself (the being-in-itself of memory), as a strategy for understanding the forms of interaction and communication that constitute the new digital conjuncture, in which communicating beings or entities tend to interact with each other. Rethinking the idea of communication beyond the dimension of transmission in informative sequences paves the way for an ecological perspective on the condition of digital dwelling.

Keywords: communication, digital, Henri Bergson, memory

Procedia PDF Downloads 139