Search results for: RBF neural network modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6771

1251 An Application Framework for Integrating Wireless Sensor and Actuator Networks for Precision Farming as Web of Things to Cloud Interface Using Platform as a Service

Authors: Sumaya Iqbal, Aijaz Ahmad Reshi

Abstract:

The advances in sensor and embedded technologies have led to rapid developments in Wireless Sensor Networks (WSNs). Researchers now focus on integrating WSNs with the Internet so that these network resources become pervasively accessible as interoperable subsystems. Recent computing technologies such as cloud computing have turned resource sharing into a converged infrastructure with well-defined service interfaces for the shared resources over the Internet. This paper presents an application architecture for Wireless Sensor and Actuator Networks (WSANs) following the Web of Things, which allows each node to be easily integrated into the Internet and made web-accessible. The architecture enables sensor and actuator nodes to be accessed and controlled through a cloud interface on the WWW. The application architecture was implemented using existing and emerging web technologies. In particular, the Representational State Transfer (REST) style was extended for the specific requirements of the application. A cloud computing environment was used as the development platform to assess the possibility of integrating the WSAN nodes with cloud services. Monitoring and control of a mushroom farm environment using WSANs was taken as the research use case.
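As an illustration of the Web-of-Things idea described above, the sketch below maps REST verbs onto WSAN node resources. The node names, resource paths and payloads are hypothetical assumptions for this sketch; the paper's actual gateway API is not specified in the abstract.

```python
# Minimal sketch of a Web-of-Things REST mapping for WSAN nodes.
# Node ids, paths and payloads are illustrative, not the paper's gateway API.

nodes = {
    "humidity-01": {"type": "sensor", "value": 82.5},   # mushroom-farm humidity sensor
    "fan-01": {"type": "actuator", "value": "off"},     # ventilation actuator
}

def handle(method, path, body=None):
    """Dispatch a REST request like GET /nodes/humidity-01 to a node resource."""
    _, collection, node_id = path.split("/")
    if collection != "nodes" or node_id not in nodes:
        return 404, None
    if method == "GET":                                  # read node state
        return 200, nodes[node_id]
    if method == "PUT" and nodes[node_id]["type"] == "actuator":
        nodes[node_id]["value"] = body                   # actuate (e.g. turn fan on)
        return 200, nodes[node_id]
    return 405, None
```

In a real deployment this dispatch would sit behind an HTTP server on the cloud gateway, translating web requests into ZigBee messages toward the nodes.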

Keywords: WSAN, REST, web of things, ZigBee, cloud interface, PaaS, sensor gateway

Procedia PDF Downloads 105
1250 The Connection between International Law and Legal Consultation on Social Media

Authors: Amir Farouk Ahmed Ali Hussin

Abstract:

Social media platforms such as Facebook, LinkedIn and X (formerly Twitter) have experienced exponential growth and a remarkable adoption rate in recent years. They provide powerful means of online social interaction and communication with family, friends, and colleagues from around the corner or across the globe, and they have become an important part of daily digital interactions for more than one and a half billion users around the world. The personal information sharing practices that social network providers encourage have led to their success as innovative social interaction platforms. However, these practices have also resulted in privacy and security concerns among different stakeholders. Addressing these privacy and security concerns is a must if social networks are to be sustainable, and existing security and privacy tools may not be enough to address them; certain precautions should be followed to protect users from the existing risks. In this research, we examine the various privacy and security issues and concerns pertaining to social media. We classify these issues and present a thorough discussion of their effects on the future of social networks. In addition, we present a set of precautionary measures that users can adopt to address these issues.

Keywords: international legal, consultation mix, legal research, small and medium-sized enterprises, strategic International law, strategy alignment, house of laws, deployment, production strategy, legal strategy, business strategy

Procedia PDF Downloads 43
1249 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction is much more accurate for a single component than for an assembly. Especially in structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints are localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs can run nonlinear analysis in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and obtains the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and the frequency domain. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the nonlinear SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure that allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
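The linear-update idea can be sketched numerically. Assuming a first-harmonic (describing-function) treatment of a local cubic spring, which the abstract does not spell out, a minimal Python version of updating a linear FRF for one nonlinear DOF might look like this (an illustrative sketch, not the authors' MATLAB implementation):

```python
import numpy as np

def linear_frf(M, C, K, w):
    """Receptance matrix H(w) = (K - w^2 M + j w C)^(-1)."""
    return np.linalg.inv(K - (w ** 2) * M + 1j * w * C)

def nl_response(M, C, K, F, w, k3, dof=0, iters=50):
    """First-harmonic response with a local cubic spring k3*x^3 at one DOF.

    The cubic element is replaced by its describing-function equivalent
    stiffness k_eq = (3/4)*k3*|x|^2, and the underlying linear model is
    'updated' iteratively, in the spirit of the nonlinear SDMM."""
    x = linear_frf(M, C, K, w) @ F                     # start from the linear response
    for _ in range(iters):
        Kmod = K.astype(complex)                       # copy with complex dtype
        Kmod[dof, dof] += 0.75 * k3 * np.abs(x[dof]) ** 2   # local stiffness update
        x = np.linalg.inv(Kmod - (w ** 2) * M + 1j * w * C) @ F
    return x
```

With k3 = 0 the iteration reproduces the linear response exactly, which is a useful sanity check on the updating scheme.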

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 255
1248 Remote Sensing and GIS Use in Trends of Urbanization and Regional Planning

Authors: Sawan Kumar Jangid

Abstract:

The paper attempts to study various facets of urbanization and regional planning in the framework of present conditions and future needs. Urbanization is a dynamic system in which development and change are prominent features; it implies population growth and changes in the primary, secondary and tertiary sectors of the economy. Urban population is increasing day by day due to natural population increase and migration from rural areas, and the impact is bound to be felt in urban areas in terms of infrastructure, environment, water supply and other vital resources. High-resolution satellite imagery is a potential solution for planning in an organized way and for monitoring the implementation of physical urban and regional plans. Remote sensing data are now widely used in urban and regional planning, and in infrastructure planning, mainly telecommunication and transport network planning, highway development, and accessibility to market-area development in terms of catchment and population built-up area density. With remote sensing it is possible to identify urban growth that falls outside formal planning control. Remote sensing and GIS techniques, combined, help planners make decisions and give the general public and investors relevant data for their use in minimum time. This paper sketches out an urbanization model for the future development of urban and regional planning, and suggests a dynamic approach towards regional development strategy.

Keywords: development, dynamic, migration, resolution

Procedia PDF Downloads 405
1247 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method

Authors: M. T. Tsepav, Y. Adamu, M. A. Umar

Abstract:

A geoelectric survey was carried out in parts of Angwan Gwari, an outskirt of Lapai Local Government Area of Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity and protective capacity of the area, from which an aquifer characterisation was made. The G41 Resistivity Meter was employed to obtain fifteen Schlumberger Vertical Electrical Sounding (VES) data sets along profiles in a square grid network. The data were processed using the Interpex 1-D sounding inversion software, which yields vertical electrical sounding curves with a layered model comprising the apparent resistivities, overburden thicknesses and depths. This information was used to evaluate the longitudinal conductance and transmissivity of the layers. The results show generally low resistivities across the survey area, with the average longitudinal conductance varying from 0.0237 Siemens in VES 6 to 0.1261 Siemens in VES 15, and almost the entire area giving values less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω.m² in VES 4 to 299070 Ω.m² in VES 1, and all but VES 4 and VES 14 had average transmissivity values greater than 400 Ω.m². These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.
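The layer quantities reported above follow from the standard Dar Zarrouk definitions for a layered earth model. As a hedged reading of the abstract's Siemens and Ω.m² values (the Ω.m² quantity is formally the transverse resistance, which the abstract reports as transmissivity), a short sketch:

```python
def dar_zarrouk(thicknesses, resistivities):
    """Dar Zarrouk parameters for a layered earth model.

    S (longitudinal conductance, Siemens) = sum(h_i / rho_i)
    R (transverse resistance, ohm.m^2)   = sum(h_i * rho_i)

    This is the standard textbook definition, offered here as an
    illustrative reading of the values quoted in the abstract."""
    S = sum(h / r for h, r in zip(thicknesses, resistivities))
    R = sum(h * r for h, r in zip(thicknesses, resistivities))
    return S, R
```

High S values indicate good aquifer protective capacity (a conductive clayey cover), while the low values reported here point to weak protection.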

Keywords: geoelectric survey, corrosivity, protective capacity, transmissivity

Procedia PDF Downloads 323
1246 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, designed to meet the specific requirements of a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordability, and ease of tuning, and that ultimately improve the performance of the controlled actuators. A vast range of compensation algorithms could be used; in industry, however, most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method that allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modelled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modelled as a disturbance that must be rejected to ensure reference tracking. Results show that 2-DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages that the state-space formulation provides for modelling MIMO systems, such controllers are expected to offer ease of tuning for disturbance rejection, assuming an experienced designer. An in-depth, multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
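As a small illustration of the simplest compensator compared above, a minimal discrete PID in position form (1 DOF) can be sketched as follows; the gains and sample time are illustrative, not the paper's tuned values:

```python
def make_pid(kp, ki, kd, ts):
    """Discrete PID in position form (1 DOF): u[k] computed from the full
    error history. Gains kp, ki, kd and sample time ts are illustrative."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def pid(err):
        state["integral"] += err * ts                  # rectangular integration
        deriv = (err - state["prev_err"]) / ts         # backward difference
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv

    return pid
```

In a simulation loop, `pid(reference - measured_position)` would produce the control signal applied to the motor model at each sample instant; the 2-DOF speed form and the state feedback designs differ in structure but are driven the same way.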

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 252
1245 Case Study of Mechanised Shea Butter Production in South-Western Nigeria Using the LCA Approach from Gate-to-Gate

Authors: Temitayo Abayomi Ewemoje, Oluwamayowa Oluwafemi Oluwaniyi

Abstract:

Agriculture and the food processing industry are among the industrial sectors that consume the largest amounts of energy, and a large amount of gases from their fuel combustion technologies is therefore released into the environment. The choice of input energy supply not only directly affects the environment but also poses a threat to human health. This study was therefore designed to assess each unit production process in order to identify hotspots, using the life cycle assessment (LCA) approach, in South-western Nigeria. Data such as machine power ratings, operation durations, and the inputs and outputs of shea butter materials for each unit process, obtained on site, were used to model the life cycle impact analysis in the GaBi6 (Holistic Balancing) software. Four scenarios were drawn up for the impact assessment: material sourcing from Kaiama (Scenarios 1 and 3) or Minna (Scenarios 2 and 4), with different heat supply sources (Liquefied Petroleum Gas 'LPG' in Scenarios 1 and 2; a 10.8 kW diesel heater in Scenarios 3 and 4). Shea butter production was modelled in GaBi6 for a functional unit of 1 kg of shea butter produced, and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) midpoint assessment was used to analyse the life cycle inventories of the four scenarios. Eight impact categories were observed in all four scenarios, of which three, Global Warming Potential (GWP) (0.613, 0.751, 0.661, 0.799) kg CO2-Equiv., Acidification Potential (AP) (0.112, 0.132, 0.129, 0.149) kg H+ moles-Equiv., and Smog (0.044, 0.059, 0.049, 0.063) kg O3-Equiv., had the greatest impacts on the environment in Scenarios 1-4, respectively. Transportation activities were also seen to contribute strongly to these impact categories, owing to the large volume of petrol combusted, which releases gases such as CO2, CH4, N2O, SO2, and NOx into the environment during transportation of the purchased raw shea kernels. The ratio of the transportation distances from Minna and from Kaiama to the production site was approximately 3.5. The shea butter unit processes with the greatest impacts in all categories, namely packaging, milling and churning in ascending order of magnitude, were identified as hotspots that may require attention. For the 1 kg shea butter functional unit, it was inferred that locating the production site at the shortest travelling distance from the raw material source and combusting LPG for heating would reduce all the assessed impact categories.

Keywords: GaBi6, Life cycle assessment, shea butter production, TRACI

Procedia PDF Downloads 297
1244 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the prediction accuracy of breast cancer progression using an original mathematical model referred to as CoMPaS and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations, and the model corresponds to the TNM classification. It allows calculating different growth periods of the primary tumor and the secondary distant metastases: 1) the 'non-visible period' of the primary tumor; 2) the 'non-visible period' of the secondary distant metastases; 3) the 'visible period' of the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes its forecast using only current patient data, whereas the others rely on additional statistical data. The CoMPaS model and predictive software: a) fit clinical trial data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have higher average prediction accuracy than other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. CoMPaS calculates the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (in days) for both periods. CoMPaS enables, for the first time, prediction of the 'whole natural history' of the growth of the primary tumor and the secondary distant metastases at each stage (pT1, pT2, pT3, pT4), relying only on the primary tumor sizes. In summary: a) CoMPaS correctly describes primary tumor growth for stages IA, IIA, IIB, IIIB (T1-4N0M0) without metastases in lymph nodes (N0); b) it facilitates understanding of the period of appearance and inception of the secondary distant metastases.
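Under the exponential growth model on which CoMPaS is built, doubling counts and doubling times follow from elementary relations (tumor volume scales with the cube of the diameter). A minimal sketch, with function names that are illustrative rather than CoMPaS's API:

```python
import math

def doublings_between(d1_mm, d2_mm):
    """Number of volume doublings for a spherical tumor growing from
    diameter d1 to diameter d2: volume scales with diameter cubed, so
    n = log2((d2/d1)^3) = 3 * log2(d2/d1)."""
    return 3 * math.log2(d2_mm / d1_mm)

def doubling_time_days(d1_mm, d2_mm, elapsed_days):
    """Volume doubling time under the exponential model V(t) = V0 * 2^(t/DT):
    DT is the elapsed time divided by the number of doublings."""
    return elapsed_days / doublings_between(d1_mm, d2_mm)
```

For example, a tumor that doubles in diameter has undergone three volume doublings, which is the kind of bookkeeping the 'non-visible' and 'visible' period calculations rest on.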

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 329
1243 Investigation of Optical, Film Formation and Magnetic Properties of PS Latex/MNPs Composites

Authors: Saziye Ugur

Abstract:

In this study, the optical, film formation, morphological and magnetic properties of a nanocomposite system composed of polystyrene (PS) latex polymer and core-shell magnetic nanoparticles (MNPs) are presented. Nine different mixtures were prepared by mixing the PS latex dispersion with different amounts of MNPs in the range of 0-100 wt%. PS/MNPs films were prepared from these mixtures on glass substrates by the drop casting method. After drying at room temperature, each film sample was separately annealed at temperatures from 100 to 250 °C for 10 min. In order to monitor the film formation process, the transmittance of these composites was measured after each annealing step as a function of MNPs content. Below a critical MNPs content (30 wt%), it was found that PS percolates into the hard MNPs phase and forms an interconnected network upon annealing. The transmission results showed that above this critical value the PS latexes no longer formed films at any temperature. Moreover, the PS/MNPs composite films showed excellent magnetic properties, with all films exhibiting superparamagnetic behavior. The saturation magnetisation (Ms) first increased up to 0.014 emu over the 0-50 wt% range of MNPs content and then decreased to 0.010 emu with increasing MNPs content; the highest value of Ms, approximately 0.020 emu, was obtained for the film filled with 85 wt% MNPs. These results indicate that the optical, film formation and magnetic properties of PS/MNPs composite films can be readily tuned by varying the MNPs loading content.

Keywords: composite film, film formation, magnetic nanoparticles, PS latex, transmission

Procedia PDF Downloads 239
1242 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University

Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat

Abstract:

Scientific researchers face the analysis of very large data sets, which are growing at a noticeable rate with today's and tomorrow's technologies. Hadoop and Spark are software frameworks developed for this purpose, and the Hadoop framework is suitable for many different hardware platforms. In this research, a Scientific Linux cluster for Big Data analysis (SLBD) is presented. SLBD runs open source software with large computational capacity on a high performance cluster infrastructure. SLBD is composed of a single cluster containing identical, commodity-grade computers interconnected via a small LAN; a fast switch and Gigabit Ethernet cards connect the four nodes. Cloudera Manager is used to configure and manage an Apache Hadoop stack. Hadoop is a framework that allows storing and processing big data across the cluster using the MapReduce algorithm. MapReduce divides a job into smaller tasks, which are assigned to the network nodes; the results are then collected to form the final result data set. The SLBD clustering system allows fast and efficient processing of the large amounts of data arising from different applications. SLBD also provides high performance, high throughput, high availability, expandability and cluster scalability.
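The MapReduce flow described above can be sketched in miniature. The word-count example below is the canonical illustration: an in-memory sort stands in for Hadoop's shuffle/sort step, so this is only a conceptual sketch, not SLBD's deployed code.

```python
from itertools import groupby

def mapper(line):
    """Map phase: emit (word, 1) pairs, as a Hadoop streaming mapper would."""
    return [(w.lower(), 1) for w in line.split()]

def reducer(pairs):
    """Reduce phase: sum the counts per key. Sorting the pairs first
    stands in for Hadoop's shuffle-and-sort between map and reduce."""
    pairs = sorted(pairs)
    return {key: sum(count for _, count in grp)
            for key, grp in groupby(pairs, key=lambda p: p[0])}
```

On a real cluster the map tasks run in parallel on the data nodes and the framework routes each key to one reducer; the per-key summation is the same.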

Keywords: big data platforms, cloudera manager, Hadoop, MapReduce

Procedia PDF Downloads 343
1241 Mobility-Aware Relay Selection in Two-Hop Unmanned Aerial Vehicle Networks

Authors: Tayyaba Hussain, Sobia Jangsher, Saqib Ali, Saqib Ejaz

Abstract:

Unmanned aerial vehicles (UAVs) have gained great popularity due to their remote operation, ease of deployment and high maneuverability in applications such as real-time surveillance, image capturing, weather and atmospheric studies, disaster site monitoring and mapping. These applications can involve real-time communication with a ground station. However, altitude and mobility pose a few challenges for this communication, as UAVs at high altitude usually require more transmit power. One possible solution is the use of multiple hops (UAVs acting as relays) together with exploitation of the UAVs' mobility patterns. In this paper, we study relay selection (UAVs acting as relays) for reliable transmission to a destination UAV. We exploit the mobility information of the UAVs to propose a Mobility-Aware Relay Selection (MARS) algorithm with the objective of improving data rates. The results are compared with a non-mobility-aware relay selection scheme and with optimal values. Numerical results show that the proposed MARS algorithm achieves 6% better data rates for mobile UAVs compared with the non-mobility-aware relay selection scheme. On average, a decrease of 20.2% in data rate is observed with MARS compared with the SDP solver in YALMIP.
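A relay-selection rule of the kind described above can be sketched as follows. This assumes a decode-and-forward two-hop rate expression with given SNR values; the actual MARS algorithm additionally predicts link quality from UAV mobility, which is not reproduced in this sketch.

```python
import math

def two_hop_rate(snr_src_relay, snr_relay_dst):
    """Achievable rate of a decode-and-forward two-hop link: the weaker hop
    limits the end-to-end rate, and the factor 1/2 accounts for the two
    time slots of the relayed transmission."""
    return 0.5 * min(math.log2(1 + snr_src_relay),
                     math.log2(1 + snr_relay_dst))

def select_relay(snrs):
    """Pick the relay UAV maximizing the two-hop rate. 'snrs' maps a relay
    id to (SNR source->relay, SNR relay->destination); a mobility-aware
    scheme would derive these from predicted UAV trajectories."""
    return max(snrs, key=lambda r: two_hop_rate(*snrs[r]))
```

Note how a relay with one very strong and one very weak hop loses to a relay with two balanced hops, which is why relay placement (and hence mobility) matters.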

Keywords: mobility aware, relay selection, time division multiple access, unmanned aerial vehicle

Procedia PDF Downloads 224
1240 Virtual and Visual Reconstructions in Museum Expositions

Authors: Ekaterina Razuvalova, Konstantin Rudenko

Abstract:

This article presents the most successful international examples of visual and virtual reconstructions of historical and cultural objects based on information and communication technologies. 3D reconstructions can demonstrate outward appearance and visualize different hypotheses connected with the represented object. Virtual reality can present any time of day and season, any century and environment. We can see how people from different countries and eras lived; we can obtain information about any object; we can see historical complexes in a real city environment even when they have been damaged or have vanished. These innovations confirm that 3D reconstruction is important for museum development. Considering the most interesting examples of visual and virtual reconstructions, we can observe that a visual reconstruction is a 3D image of an object, historical complex, building or phenomenon; such images are static, and we see them only as momentary views. A virtual reconstruction, by contrast, is an environment with its own time, rules and phenomena. These reconstructions are continuous: seasons, time of day and natural conditions can change within them, demonstrating what a virtual world can be. In conclusion, new technologies give us opportunities to expand the boundaries of museum space, improve museum expositions and create an emotional atmosphere of game-like immersion that can engage visitors. The use of network resources allows the number of visitors to increase, and the opportunities offered by virtual reconstruction show the creative side of the museum business.

Keywords: computer technologies, historical reconstruction, museums, museum expositions, virtual reconstruction

Procedia PDF Downloads 314
1239 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete records are used. Usually, however, the proportion of complete records is rather small, which leads to most of the information being neglected. Moreover, even the complete records may be strongly distorted. In addition, the reason that data are missing might itself contain information, which is ignored with that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values, compared to using the usually small percentage of complete records (the baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all records in the model, the distortion of the first training set (the complete records) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms under several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
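The evaluation procedure described above (randomly knocking out observed entries, imputing them, and comparing against the held-out truth) can be sketched with a simple column-mean imputer standing in for the paper's unsupervised learners:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_impute(X):
    """Baseline imputation: replace each NaN with its column mean.
    A stand-in for the clustering / PCA / neural-network imputers."""
    means = np.nanmean(X, axis=0)
    out = X.copy()
    rows, cols = np.where(np.isnan(out))
    out[rows, cols] = np.take(means, cols)
    return out

def imputation_rmse(X, frac=0.2):
    """Knock out a random fraction of observed entries, impute them,
    and return the RMSE against the held-out true values."""
    mask = rng.random(X.shape) < frac
    X_miss = X.copy()
    X_miss[mask] = np.nan
    X_hat = mean_impute(X_miss)
    return float(np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2)))
```

Running `imputation_rmse` with each candidate imputer and parameter set, and keeping the combination with the lowest error, is the model-selection loop the abstract describes.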

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 270
1238 Saudi Human Awareness Needs: A Survey in How Human Causes Errors and Mistakes Leads to Leak Confidential Data with Proposed Solutions in Saudi Arabia

Authors: Amal Hussain Alkhaiwani, Ghadah Abdullah Almalki

Abstract:

Recently, human error has increasingly become a major factor in security breaches that affect confidential data, and most cyber data breaches are caused by human errors. A single individual's mistake can give an attacker access to the entire network, bypassing the implemented access controls without immediate detection, and unaware employees are vulnerable to social engineering cyber-attacks. Providing security awareness to people is part of a company's protection process: cyber risks cannot be reduced by implementing technology alone, and human awareness of security significantly reduces the risks by encouraging changes in staff cyber-awareness. In this paper, we focus on human awareness and the need to maintain the required level of security education; we review human errors and introduce a proposed solution to prevent breaches from recurring. Saudi Arabia has recently faced many attacks using different social engineering methods. As the country has become a target for many states and individuals, a defense mechanism that begins with awareness is needed to preserve privacy and protect confidential data against possible intended attacks.

Keywords: cybersecurity, human aspects, human errors, human mistakes, security awareness, Saudi Arabia, security program, security education, social engineering

Procedia PDF Downloads 136
1237 A Comparative Evaluation of the SIR and SEIZ Epidemiological Models to Describe the Diffusion Characteristics of COVID-19 Polarizing Viewpoints Online

Authors: Maryam Maleki, Esther Mead, Mohammad Arani, Nitin Agarwal

Abstract:

This study examines how opposing viewpoints related to COVID-19 were diffused on Twitter. To accomplish this, six datasets were analyzed using two epidemiological models: SIR (Susceptible, Infected, Recovered) and SEIZ (Susceptible, Exposed, Infected, Skeptics). The six datasets were chosen because they represent opposing viewpoints on the COVID-19 pandemic: three contain anti-subject hashtags, while the other three contain pro-subject hashtags. The time frame for all datasets spans three years, from January 2020 to December 2022. The findings revealed that while both models were effective in evaluating the propagation trends of these polarizing viewpoints, the SEIZ model was more accurate, with a relatively lower error rate (6.7%) compared to the SIR model (17.3%). Additionally, the relative error for both models was lower for anti-subject hashtags than for pro-subject hashtags. By leveraging epidemiological models, insights into the propagation trends of polarizing viewpoints on Twitter were gained. This study paves the way for the development of methods to prevent the spread of ideas that lack scientific evidence while promoting the dissemination of scientifically backed ideas.
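The two models compared above can be sketched as Euler-stepped ODE systems. The SEIZ transition structure below follows the common formulation (susceptibles contacting infected or skeptics either adopt immediately, with probabilities p and l, or enter an exposed class that converts at contact rate rho and incubation rate eps); all parameter values used here are illustrative, not the fitted ones from the study.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the SIR model applied to hashtag diffusion:
    susceptible users become 'infected' (adopt the hashtag), then recover."""
    n = s + i + r
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def seiz_step(s, e, i, z, beta, b, p, l, rho, eps, dt=1.0):
    """One Euler step of the SEIZ model: contact with infected (rate beta)
    or skeptics (rate b) moves susceptibles either directly to I or Z
    (probabilities p and l) or into the exposed class E, which converts
    to I at contact rate rho and incubation rate eps."""
    n = s + e + i + z
    ds = -beta * s * i / n - b * s * z / n
    de = ((1 - p) * beta * s * i / n + (1 - l) * b * s * z / n
          - rho * e * i / n - eps * e)
    di = p * beta * s * i / n + rho * e * i / n + eps * e
    dz = l * b * s * z / n
    return s + ds * dt, e + de * dt, i + di * dt, z + dz * dt
```

Both steppers conserve the total population, which is the basic sanity check before fitting the rate parameters to observed tweet counts.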

Keywords: mathematical modeling, epidemiological model, seiz model, sir model, covid-19, twitter, social network analysis, social contagion

Procedia PDF Downloads 43
1236 Utilization of Traditional Medicine for Treatment of Selected Illnesses among Crop-Farming Households in Edo State, Nigeria

Authors: Adegoke A. Adeyelu, Adeola T. Adeyelu, S. D. Y. Alfred, O. O. Fasina

Abstract:

This study examines the use of traditional medicines for the treatment of selected illnesses among crop-farming households in Edo State, Nigeria. A sample of ninety (90) households was randomly selected for the study. Data were collected with a structured questionnaire alongside focus group discussions (FGDs). Results show that the mean age was 50 years, and the majority (76.7%) of the sampled farmers were below 60 years old. The majority (80.0%) of the farmers were married, and about 92.2% had formal education. The majority of respondents (76.7%) had a household size of between 1 and 10 persons, and about 55.6% had spent 11 years or more in crop farming. Ranked from most to least treated, the illnesses were tuberculosis (1st), typhoid (2nd), skin infection (3rd), acute headache (4th), cough (5th), farm injuries (6th), waist pains (7th), and malaria (8th). Eighty percent of respondents had spent N10,000.00 ($27) or less on the treatment of illnesses, 8.9% had spent N10,000.00-N20,000.00 ($27-$55), 4.4% had spent between N20,100.00 and N30,000.00 ($55-$83), while 6.7% had spent more than N30,100.00 ($83) in the year prior to the study. Age, years of farming, farm size, household size, level of income, cost of treatment, level of education, social network, and culture were among the statistically significant factors influencing the utilization of traditional medicine. Farmers should be educated on methods of preventing illnesses, which are far cheaper than curative treatment.

Keywords: crop farming-households, selected illnesses, traditional medicines, Edo State

Procedia PDF Downloads 180
1235 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, on average, more than 10 tropical cyclones that come within damaging reach every year, along with earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs, and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) a stochastic event set that represents the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair need for local buildings exposed to the hazard, and 4) a financial module that, given the policy conditions, estimates the resulting losses. The events module is comprised of events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that yields events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a super-stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (steel-reinforced concrete), and high-rise buildings.
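To illustrate the idea of characteristic events, the intensity at a target annual exceedance probability can be read off an empirical distribution of simulated yearly maxima. The sketch below uses a synthetic catalog with an assumed exponential tail; it is a toy analogue of an events module, not KCC's actual sampling procedure.

```python
import random

# Toy events-module illustration: find the intensity whose annual
# exceedance probability equals a target (e.g. 0.01 -> the ~100-year
# characteristic event). The catalog is entirely synthetic.

def characteristic_event(yearly_max_intensities, annual_prob):
    """Empirical intensity at the given annual exceedance probability."""
    ranked = sorted(yearly_max_intensities, reverse=True)
    idx = max(0, int(round(annual_prob * len(ranked))) - 1)
    return ranked[idx]

if __name__ == "__main__":
    random.seed(0)
    # pretend each sample is one simulated year's peak wind gust (m/s)
    catalog = [random.expovariate(1 / 15) + 25 for _ in range(10000)]
    print(characteristic_event(catalog, 0.01))   # rarer -> stronger event
    print(characteristic_event(catalog, 0.004))
```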

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 248
1234 Treatment of Greywater at Household by Using Ceramic Tablet Membranes

Authors: Abdelkader T. Ahmed

Abstract:

Greywater is any wastewater draining from a household, including kitchen sinks and bathroom tubs, except toilet wastes. Although this used water may contain grease, food particles, hair, and any number of other impurities, it may still be suitable for reuse after treatment. Greywater reuse serves two purposes: reducing the amount of freshwater needed to supply a household, and reducing the amount of wastewater entering sewer systems. This study aims to investigate and design a simple and cheap unit to treat household greywater using ceramic membranes and reuse it for toilet flushing. The study includes an experimental program for manufacturing several tablet ceramic membranes from clay and sawdust in three different mixtures. The productivity and efficiency of these ceramic membranes were investigated by chemical and physical tests on greywater before and after filtration through the membranes. A treatment unit built from these ceramic membranes was then designed based on the laboratory results. Results showed that increasing the sawdust fraction in the mixture increased the flow rate and productivity of treated water but at the same time decreased the water quality. The efficiency of the new ceramic membrane reached 95%. The treatment unit saves 0.3 m³/day of water for toilet flushing that would otherwise be drawn from the fresh water supply network.

Keywords: ceramic membranes, filtration, greywater, wastewater treatment

Procedia PDF Downloads 317
1233 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study

Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple

Abstract:

There is a dramatic surge in the adoption of machine learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, and malware detection, to name a few. With the application of learning methods in such diverse domains, artificial intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been on developing underpinning ML algorithms that can improve accuracy, while factors such as the resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and three defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt machine learning techniques in security-critical areas such as the nuclear industry without rigorous testing, since they may be vulnerable to adversarial attacks. While common defence methods can effectively defend against individual attacks, none of the three considered provides protection against all five adversarial attacks analysed.
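As a hedged illustration of how such an attack works, the sketch below applies the fast gradient sign method (FGSM), one widely studied evasion attack, to a toy linear "crack detector". The weights and features are invented; the paper's actual attacks, defences, model, and data are not reproduced here.

```python
import math

# Toy FGSM attack on a linear "crack detector" (logistic model).
# All weights and feature values below are illustrative assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """P(crack) under a logistic model with weights w and bias b."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method: x' = x + eps * sign(dL/dx).
    For logistic loss, dL/dx_i = (p - y) * w_i."""
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

if __name__ == "__main__":
    w, b = [2.0, -1.5, 0.5], 0.1   # pretend trained detector weights
    x, y = [0.8, 0.2, 0.6], 1.0    # image-statistic features of a crack
    print(predict(w, b, x))        # initially confident "crack"
    x_adv = fgsm(w, b, x, y, eps=0.5)
    print(predict(w, b, x_adv))    # confidence drops after perturbation
```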

Keywords: adversarial machine learning, attacks, defences, nuclear industry, crack detection

Procedia PDF Downloads 139
1232 Achieving High Renewable Energy Penetration in Western Australia Using Data Digitisation and Machine Learning

Authors: A. D. Tayal

Abstract:

The energy industry is undergoing significant disruption. This research outlines that, whilst challenging, this disruption is also an emerging opportunity for electricity utilities. One such opportunity is leveraging developments in data analytics and machine learning. As the uptake of renewable energy technologies and complementary control systems increases, electricity grids will likely transform towards dense microgrids with high penetration of renewable generation sources, rich in network and customer data, and linked through intelligent, wireless communications. Data digitisation and analytics have already impacted numerous industries, and their influence on the energy sector is growing as computational capabilities increase to manage big data and as machines develop algorithms to solve the energy challenges of the future. The objective of this paper is to address how far the uptake of renewable technologies can go given the constraints of existing grid infrastructure, and to provide a qualitative assessment of how higher levels of renewable energy penetration can be facilitated by incorporating even broader technological advances in the fields of data analytics and machine learning. Western Australia is used as a contextualised case study, given its abundant and diverse renewable resources (solar, wind, biomass, and wave) and isolated networks, making a high penetration of renewables a feasible target for policy makers over the coming decades.

Keywords: data, innovation, renewable, solar

Procedia PDF Downloads 349
1231 Distributed Control Strategy for Dispersed Energy Storage Units in the DC Microgrid Based on Discrete Consensus

Authors: Hanqing Yang, Xiang Meng, Qi Li, Weirong Chen

Abstract:

SOC (state of charge) based droop control has limitations on load power sharing among different energy storage units due to line impedance. In this paper, a distributed control strategy for dispersed energy storage units in a DC microgrid based on discrete consensus is proposed. Firstly, a sparse communication network is built so that local controllers can exchange voltage, current, and SOC information with their neighbors. An average grid voltage can be evaluated to compensate the voltage offset introduced by droop control, and an objective virtual resistance fulfilling the above requirement can be dynamically calculated to distribute load power according to the SOC of the energy storage units. Then, the stability of the whole system and the influence of communication delay are analyzed. It can be concluded that this control strategy improves robustness and flexibility, since there is no central controller. Finally, a model of a DC microgrid with dispersed energy storage units and loads is built, the discrete distributed algorithm is established, and a communication protocol is developed. Co-simulation between Matlab/Simulink and JADE (Java Agent Development Framework) has verified the effectiveness of the proposed control strategy.
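The discrete average-consensus iteration at the heart of such a strategy can be sketched in a few lines: each unit repeatedly averages with its communication neighbors until all units agree on the grid-wide mean voltage, with no central controller. The ring topology, step size, and bus voltages below are illustrative assumptions, not the paper's system.

```python
# Hedged sketch of discrete average consensus among storage units.
# Topology and voltage values are invented for illustration.

def consensus_step(values, neighbors, eps):
    """One synchronous iteration: x_i <- x_i + eps * sum_j (x_j - x_i)."""
    return [
        x + eps * sum(values[j] - x for j in neighbors[i])
        for i, x in enumerate(values)
    ]

def run_consensus(values, neighbors, eps=0.25, iters=200):
    for _ in range(iters):
        values = consensus_step(values, neighbors, eps)
    return values

if __name__ == "__main__":
    # 4 units on a ring communication network; local bus voltages in volts
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    voltages = [47.8, 48.2, 48.5, 47.9]
    print(run_consensus(voltages, neighbors))  # all converge to the mean, 48.1 V
```

The step size must satisfy eps < 1/deg_max for convergence, which the ring (degree 2, eps = 0.25) respects.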

Keywords: dispersed energy storage units, discrete consensus algorithm, state of charge, communication delay

Procedia PDF Downloads 258
1230 Ecosystems: An Analysis of Generation Z News Consumption, Its Impact on Evolving Concepts and Applications in Journalism

Authors: Bethany Wood

Abstract:

The global pandemic changed the way audiences used social media, with young people spending more hours on these platforms due to lockdown. Reports by Ofcom have demonstrated that the internet is the second most popular platform for accessing news after television in the UK, with social media and the internet ranked as the most popular platforms for accessing news among those aged 16-24. These statistics are unsurprising considering that, at the time of writing, 98 percent of Generation Z (Gen Z) owned a smartphone, with the subsequent ease and accessibility of social media. Technology is constantly developing and, with this, its importance is becoming more prevalent with each generation: Baby Boomers (1946-1964) consider it something useful, whereas millennials (1981-1997) believe it a necessity for day-to-day living. Gen Z, otherwise known as digital natives, have grown up with this technology at their fingertips, and social media is a norm. It helps form their identity and their affiliations, and it opens gateways for them to engage with news in a new way. It is a common misconception that Gen Z do not consume news; they are simply doing so in a different way to their predecessors. Using a sample of 800 18-20 year olds and drawing on Generational theory, Actor Network Theory, and the Social Shaping of Technology, this research provides a critical analysis of how Gen Z's news consumption and engagement habits are developing alongside technology to shape the future format of news and its distribution. From that perspective, allied with the empirical approach, it is possible to provide research-orientated advice for the industry and even help to redefine traditional concepts of journalism.

Keywords: journalism, generation z, digital, social media

Procedia PDF Downloads 59
1229 Water Quality in Buyuk Menderes Graben, Turkey

Authors: Tugbanur Ozen Balaban, Gultekin Tarcan, Unsal Gemici, Mumtaz Colak, I. Hakki Karamanderesi

Abstract:

Buyuk Menderes Graben is located in Western Anatolia (Turkey). The graben has become the largest industrial and agricultural area in the region, with a total population exceeding 3,000,000. There are two major cities within the study area, from west to east: Aydın and Denizli. The study area is very rich with regard to both cold ground waters and thermal waters. Electricity production using the geothermal potential has become very popular in this area in recent decades. Buyuk Menderes Graben is a tectonically active extensional region undergoing a north-south extensional tectonic regime that commenced at the latest during the Early to Middle Miocene. The basement of the study area consists of Menderes massif rocks, made up of high- to low-grade metamorphics, which act as aquifers for both cold ground waters and thermal waters depending on the location. Neogene terrestrial sediments, mainly alluvial fan deposits in different facies that unconformably cover the basement rocks, have very low permeability and may locally act as cap rocks for the geothermal systems. The youngest unit is the Quaternary alluvium, consisting of Holocene alluvial deposits, which forms the shallow regional aquifer in the study area. All the waters are of meteoric origin and reflect shallow or deep circulation according to their ¹⁸O, ²H, and ³H contents. Meteoric waters move to deep zones through the fractured system and rise to the surface along the faults. Water samples (drilling well, spring, and surface waters) and local seawater were collected between 2010 and 2012. Geochemical modeling with the PHREEQCi speciation code was used to calculate the distribution of aqueous species and exchange processes. Geochemical analyses show that cold ground water types evolve from Ca-Mg-HCO3 to Na-Cl-SO4, and geothermal aquifer waters in Aydın reflect the Na-Cl-HCO3 water type. Water types in Denizli are Ca-Mg-HCO3 and Ca-Mg-HCO3-SO4, while thermal waters generally reflect Na-HCO3-SO4. The B versus Cl ratios increase from east to west with the proportion of seawater introduced into the freshwater aquifers and geothermal reservoirs. Concentrations of some elements (As, B, Fe, and Ni) are higher than the tolerance limits of the drinking water standard of Turkey (TS 266) and international drinking water standards (WHO, FAO, etc.).
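The seawater proportion inferred from chloride can be illustrated with a simple two-end-member mixing calculation, a standard hydrogeochemical device. The end-member concentrations below are generic textbook-style values, not the study's measured data.

```python
# Two-end-member (fresh water / seawater) mixing sketch based on chloride,
# a conservative tracer. End-member Cl concentrations (mg/L) below are
# generic illustrative values, not this study's data.

def seawater_fraction(cl_sample, cl_fresh=20.0, cl_sea=19400.0):
    """Fraction of seawater implied by a sample's chloride content."""
    return (cl_sample - cl_fresh) / (cl_sea - cl_fresh)

if __name__ == "__main__":
    print(f"{seawater_fraction(1000.0):.3%}")  # ~5% seawater in the mixture
```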

Keywords: Buyuk Menderes, isotope chemistry, geochemical modelling, water quality

Procedia PDF Downloads 522
1228 Lessons Learned from Covid19 - Related ERT in Universities

Authors: Sean Gay, Cristina Tat

Abstract:

This presentation details how a university in Western Japan implemented its English for Academic Purposes (EAP) program during the onset of CoViD-19 in the spring semester of 2020. In that semester, after a two-week delay, all courses within the School of Policy Studies EAP Program at Kwansei Gakuin University were offered in an online asynchronous format. The rationale for this decision was not to disadvantage students who might not have access to the devices necessary for taking part in synchronous online lessons. The course coordinators were tasked with consolidating the materials originally designed for face-to-face 14-week courses into a 12-week asynchronous online semester and with uploading the modified course materials to Luna, the university's network, which is a modified version of Blackboard. Based on research to determine the social and academic impacts of this CoViD-19 ERT approach on the students who took part in the EAP program, this presentation explains how future curriculum design and implementation can be managed in a post-CoViD world. A wide variety of lessons were salient. The role of the classroom as a social institution was very prominent; however, awareness of cognitive burdens and strategies to mitigate them may be more valuable for teachers. The lessons learned during this period of ERT can help teachers moving forward.

Keywords: asynchronous online learning, emergency remote teaching (ERT), online curriculum design, synchronous online learning

Procedia PDF Downloads 188
1227 Phishing Detection: Comparison between Uniform Resource Locator and Content-Based Detection

Authors: Nuur Ezaini Akmar Ismail, Norbazilah Rahim, Norul Huda Md Rasdi, Maslina Daud

Abstract:

Web applications are the most targeted by attackers because they are accessible to end users. This has become all the more advantageous to attackers since not all end users are aware of what sensitive data they have already leaked through the Internet, especially via social networks, for the sake of 'sharing'. An attacker can use information such as personal details, favourite artists, actors or actresses, music, politics, and medical records to customize a phishing attack and thus trick users into clicking on malware-laced attachments. Phishing is one of the most popular social engineering attacks against web applications. There are several methods to detect phishing websites, such as blacklist/whitelist based detection, heuristic-based detection, and visual similarity-based detection. Based on papers reviewed from the past few years, this paper presents a comparison between the heuristic-based technique, which uses features of the uniform resource locator (URL), and visual similarity-based detection techniques, which compare the content of a suspected phishing page with the legitimate one in order to detect new phishing sites. The comparison focuses on three indicators: false positive and false negative rates, the accuracy of the method, and the time consumed to detect a phishing website.
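A minimal sketch of the URL-feature side of the comparison is shown below; the feature set, red-flag thresholds, and equal weights are illustrative assumptions rather than any specific surveyed method.

```python
import re
from urllib.parse import urlparse

# Illustrative heuristic URL features of the kind used in phishing
# detection. Thresholds and equal weighting are assumptions.

def url_features(url):
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "uses_ip": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", host)),
        "has_at_symbol": "@" in url,
        "num_dots": host.count("."),
        "url_length": len(url),
        "uses_https": parsed.scheme == "https",
        "has_hyphen_in_host": "-" in host,
    }

def heuristic_score(url):
    """Crude suspicion score: +1 per red flag (illustrative weights)."""
    f = url_features(url)
    score = 0
    score += f["uses_ip"]            # raw IP instead of a domain name
    score += f["has_at_symbol"]      # '@' can hide the real destination
    score += f["num_dots"] > 3       # deeply nested subdomains
    score += f["url_length"] > 75    # unusually long URL
    score += not f["uses_https"]     # no TLS
    score += f["has_hyphen_in_host"] # lookalike domains often use '-'
    return score

if __name__ == "__main__":
    print(heuristic_score("https://www.example.com/login"))
    print(heuristic_score("http://192.168.10.5/secure-update/@bank/login"))
```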

Keywords: heuristic-based technique, phishing detection, social engineering and visual similarity-based technique

Procedia PDF Downloads 162
1226 Competitiveness of a Shared Autonomous Electric Vehicle Fleet Compared to Traditional Means of Transport: A Case Study for Transportation Network Companies

Authors: Maximilian Richter

Abstract:

Implementing shared autonomous electric vehicles (SAEVs) has many advantages. The main advantages are achieved when SAEVs are offered as on-demand services by a fleet operator. However, autonomous mobility on demand (AMoD) will spread nationwide only if fleet operation is economically profitable for the operator. This paper proposes a microscopic approach to modeling two implementation scenarios of an AMoD fleet. The city of Zurich is used as a case study, with the results and findings being generalizable to other similar European and North American cities. The data are based on the traffic model of the canton of Zurich (Gesamtverkehrsmodell des Kantons Zürich, GVM-ZH). To determine financial profitability, demand from the simulation results is combined with an analysis of the costs of an SAEV per kilometer. The results demonstrate that, depending on the scenario, journeys can be offered profitably to customers for CHF 0.30 to CHF 0.40 per kilometer. While larger fleets allow for lower price levels and increased profits in the long term, smaller fleets exhibit higher efficiency levels and profit opportunities per day. The paper concludes with recommendations for how fleet operators can prepare themselves to maximize profit in the autonomous future.
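A back-of-the-envelope version of the per-kilometer profitability calculation can be sketched as follows; every cost and demand figure is an invented placeholder, not an input or result of the paper.

```python
# Toy SAEV fleet economics: cost per km and daily fleet profit.
# All figures (CHF) are illustrative placeholders, not the paper's inputs.

def cost_per_km(vehicle_cost, lifetime_km, energy_per_km, energy_price,
                maintenance_per_km, overhead_per_km):
    """Per-kilometer operating cost: depreciation + energy + upkeep."""
    depreciation = vehicle_cost / lifetime_km
    energy = energy_per_km * energy_price          # kWh/km * CHF/kWh
    return depreciation + energy + maintenance_per_km + overhead_per_km

def daily_profit(fleet_size, km_per_vehicle_day, fare_per_km, cost_km):
    km = fleet_size * km_per_vehicle_day
    return km * (fare_per_km - cost_km)

if __name__ == "__main__":
    c = cost_per_km(vehicle_cost=60000, lifetime_km=400000,
                    energy_per_km=0.16, energy_price=0.25,
                    maintenance_per_km=0.05, overhead_per_km=0.08)
    print(round(c, 3))                      # CHF per km
    print(daily_profit(500, 300, 0.40, c))  # CHF per day for the fleet
```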

Keywords: autonomous vehicle, mobility on demand, traffic simulation, fleet provider

Procedia PDF Downloads 110
1225 Thermal Comfort in Office Rooms in a Historic Building with Modernized Heating, Ventilation and Air Conditioning Systems

Authors: Hossein Bakhtiari, Mathias Cehlin, Jan Akander

Abstract:

Envelopes with low thermal performance are a common characteristic of many European historic buildings, leading to higher energy demand for heating and cooling as well as insufficient thermal comfort for occupants. This paper presents the results of a study on thermal comfort in the City Hall (Rådhuset) in Gävle, Sweden. This historic building is currently used as an office building. It is equipped with two relatively modern mechanical heat recovery ventilation systems with displacement ventilation supply devices in the offices. The district heating network heats the building via pre-heated supply air and radiators. Summer cooling comes from an electric heat pump that rejects heat into the exhaust ventilation air. A building management system controls the HVAC (heating, ventilation and air conditioning) equipment. The methodology is based on on-site measurements, data logging on the management system, and an evaluation of the occupants' perception of the indoor environment over a summer and a winter period using a standardized questionnaire. The main aim of the study is to investigate whether or not modernized HVAC systems alone are enough to achieve adequate thermal comfort in a historic building with poor envelope performance used as an office building under Nordic climate conditions.

Keywords: historic buildings, on-site measurements, standardized questionnaire, thermal comfort

Procedia PDF Downloads 357
1224 The Effect of Molecular Weight on the Cross-Linking of Two Different Molecular Weight LLDPE Samples

Authors: Ashkan Forootan, Reza Rashedi

Abstract:

Polyethylene has wide usage areas such as blow molding, pipe, film, and cable insulation. However, despite its growing range of applications, it has some constraints, such as a limited operating temperature of 70 °C. Thermosetting of polyethylene, in which the molecules are knotted into a 3D molecular network, was developed to overcome this problem and to raise the applicable temperature of the polymer. This paper reports the cross-linking of two different molecular weight grades of LLDPE by adding 0.5, 1, and 2% of DCP (dicumyl peroxide). DCP was chosen for its prevalence among various cross-linking agents. Structural parameters such as molecular weight, melt flow index, comonomer content, and number of branches were obtained through the relevant tests, gel permeation chromatography (GPC) and Fourier transform infrared (FTIR) spectrometry. After calculating the gel content percentage, the properties of the pure and cross-linked samples were compared by thermal and mechanical analysis with DMTA and FTIR, and the effects of cross-linking, such as changes in viscous and elastic modulus, were discussed using various structural parameters such as MFI, molecular weight, and short chain branches. The studies showed that the cross-linked polymer, unlike the pure one, retained a solid state with thermo-mechanical integrity in the range of 110 to 120 °C, which helps overcome the problem of using polyethylene at temperatures near the melting point.

Keywords: LLDPE, cross-link, structural parameters, DCP, DMTA, GPC

Procedia PDF Downloads 290
1223 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require given the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images. Algorithms such as k-means helped group patients, and association rule mining identified connections between treatments and patient responses. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and operate more efficiently: it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
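As a hedged sketch of one of the predictive techniques mentioned, the snippet below trains a tiny logistic-regression risk model with stochastic gradient descent on synthetic patient records; the features, data, and outcomes are entirely made up.

```python
import math

# Toy logistic-regression disease-risk model trained by SGD.
# All patient records and features below are synthetic examples.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Predicted disease probability for feature vector x."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def train(features, labels, lr=0.1, epochs=2000):
    """Per-sample gradient descent on the logistic loss."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            err = predict(w, b, x) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

if __name__ == "__main__":
    # features: [normalized age, normalized blood pressure]
    records = [[0.2, 0.3], [0.3, 0.2], [0.8, 0.9], [0.9, 0.8]]
    outcomes = [0, 0, 1, 1]
    w, b = train(records, outcomes)
    print(predict(w, b, [0.85, 0.85]))  # high-risk profile
    print(predict(w, b, [0.25, 0.25]))  # low-risk profile
```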

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 53
1222 The Ontological Memory in Bergson as a Conceptual Tool for the Analysis of the Digital Conjuncture

Authors: Douglas Rossi Ramos

Abstract:

The current digital conjuncture, called by some authors the 'Internet of Things' (IoT), 'Web 2.0', or even 'Web 3.0', consists of a network that encompasses communication among objects and entities of all kinds, such as data, information, technologies, and people. At this juncture, characterized especially by an "object socialization," communication can no longer be represented as a simple informational flow of messages from a sender, crossing a channel or medium, and reaching a receiver. The idea of communication must, therefore, be thought of more broadly, so that it becomes possible to analyze the communicative process in terms of interactions between humans and nonhumans. To think about this complexity, a communicative process that encompasses both humans and other communicating beings or entities (objects and things), it is necessary to constitute a new epistemology of communication and to rethink concepts and notions commonly attributed to humans, such as 'memory'. This research aims to contribute to this epistemological constitution through a discussion of the notion of memory according to the complex ontology of Henri Bergson. Among the results, the notion of memory in Bergson presents itself as a conceptual tool for the analysis of posthumanism and the anthropomorphic conjuncture of the new digital advent; from this followed the need to think of an ontological memory, analyzed as a being in itself (the being-in-itself of memory), as a strategy for understanding the forms of interaction and communication that constitute the new digital conjuncture, in which communicating beings or entities tend to interact with each other. Rethinking the idea of communication beyond the dimension of transmission in informative sequences paves the way for an ecological perspective on the condition of digital dwelling.

Keywords: communication, digital, Henri Bergson, memory

Procedia PDF Downloads 142