Search results for: events
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 418

118 Detection of Linkages Between Extreme Flow Measures and Climate Indices

Authors: Mohammed Sharif, Donald Burn

Abstract:

Large scale climate signals and their teleconnections can influence hydro-meteorological variables on a local scale. Several extreme flow and timing measures, including high flow and low flow measures, from 62 hydrometric stations in Canada are investigated to detect possible linkages with several large scale climate indices. The streamflow data used in this study are derived from the Canadian Reference Hydrometric Basin Network and are characterized by relatively pristine and stable land-use conditions with a minimum of 40 years of record. A composite analysis approach was used to identify linkages between extreme flow and timing measures and climate indices. The approach involves determining the 10 highest and 10 lowest values of various climate indices from the data record. Extreme flow and timing measures for each station were examined for the years associated with the 10 largest values and the years associated with the 10 smallest values. In each case, a re-sampling approach was applied to determine if the 10 values of extreme flow measures differed significantly from the series mean. Results indicate that several stations are impacted by the large scale climate indices considered in this study. The results allow the determination of any relationship between stations that exhibit a statistically significant trend and stations for which the extreme measures exhibit a linkage with the climate indices.
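
As a rough illustration of the composite analysis and re-sampling test described above, the sketch below (Python, with synthetic series) compares the mean of a flow measure in the ten years with the largest index values against re-sampled ten-year means. The station data, climate index and significance rule are placeholders, not the study's.

```python
import numpy as np

def composite_resampling_test(flow, index, n_extreme=10, n_resample=10000, seed=0):
    """Test whether flow values in the years with the highest climate-index
    values differ significantly from the series mean. `flow` and `index`
    are aligned annual series (one value per year)."""
    rng = np.random.default_rng(seed)
    years_high = np.argsort(index)[-n_extreme:]      # years with the 10 largest index values
    composite_mean = flow[years_high].mean()

    # Re-sampling: distribution of means of randomly chosen 10-year subsets
    resampled = np.array([
        flow[rng.choice(len(flow), n_extreme, replace=False)].mean()
        for _ in range(n_resample)
    ])
    # Two-sided empirical p-value for the composite mean
    p = np.mean(np.abs(resampled - flow.mean()) >= abs(composite_mean - flow.mean()))
    return composite_mean, p

# Example with synthetic data (40+ years of record, as in the study)
flow = np.random.default_rng(1).gamma(2.0, 50.0, size=45)
index = np.random.default_rng(2).normal(size=45)     # hypothetical climate index
print(composite_resampling_test(flow, index))
```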

Keywords: flood analysis, low-flow events, climate change, trend analysis, Canada

117 Variation of Streamwise and Vertical Turbulence Intensity in a Smooth and Rough Bed Open Channel Flow

Authors: Md Abdullah Al Faruque, Ram Balachandar

Abstract:

An experimental study with four different types of bed conditions was carried out to understand the effect of roughness in open channel flow at two different Reynolds numbers. The bed conditions include a smooth surface and three different roughness conditions, which were generated using sand grains with a median diameter of 2.46 mm. The three rough conditions include a surface with distributed roughness, a surface with continuously distributed roughness and a sand bed with a permeable interface. A commercial two-component fibre-optic LDA system was used to conduct the velocity measurements. The variables of interest include the mean velocity, turbulence intensity, correlation between the streamwise and the wall normal turbulence, Reynolds shear stress and velocity triple products. Quadrant decomposition was used to extract the magnitude of the Reynolds shear stress of the turbulent bursting events. The effect of roughness was evident throughout the flow depth. The results show that distributed roughness has the greatest roughness effect followed by the sand bed and the continuous roughness. Compared to the smooth bed, the streamwise turbulence intensity reduces but the vertical turbulence intensity increases at a location very close to the bed due to the introduction of roughness. Although the same sand grain is used to create the three different rough bed conditions, the difference in the turbulence intensity is an indication that the specific geometry of the roughness has an influence on turbulence structure.
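
The quadrant decomposition mentioned above can be sketched as follows. This is a generic implementation of the standard u'-v' quadrant analysis (with an optional hole size), not the authors' processing code.

```python
import numpy as np

def quadrant_decomposition(u, v, hole=0.0):
    """Split the instantaneous Reynolds shear stress contribution u'v' into the
    four quadrants of the (u', v') plane; `hole` excludes weak events with
    |u'v'| <= hole * |<u'v'>| (hole size H)."""
    up, vp = u - u.mean(), v - v.mean()
    uv = up * vp
    threshold = hole * abs(uv.mean())
    quadrants = {
        "Q1 (outward interaction)": (up > 0) & (vp > 0),
        "Q2 (ejection)":            (up < 0) & (vp > 0),
        "Q3 (inward interaction)":  (up < 0) & (vp < 0),
        "Q4 (sweep)":               (up > 0) & (vp < 0),
    }
    contrib = {}
    for name, mask in quadrants.items():
        mask = mask & (np.abs(uv) > threshold)
        contrib[name] = uv[mask].sum() / len(uv)   # fractional contribution to <u'v'>
    return contrib
```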

Keywords: Open channel flow, smooth bed, rough bed, Reynolds number, turbulence.

116 Probability and Instruction Effects in Syllogistic Conditional Reasoning

Authors: Olimpia Matarazzo, Ivana Baldassarre

Abstract:

The main aim of this study was to examine whether people understand indicative conditionals on the basis of syntactic factors or on the basis of subjective conditional probability. The second aim was to investigate whether the conditional probability of q given p depends on the antecedent and consequent sizes or derives from inductive processes that establish a link of plausible co-occurrence between semantically or experientially associated events. These competing hypotheses have been tested through a 3 x 2 x 2 x 2 mixed design involving the manipulation of four variables: type of instructions ("Consider the following statement to be true", "Read the following statement", and a condition with no conditional statement); antecedent size (high/low); consequent size (high/low); statement probability (high/low). The first variable was between-subjects, the others were within-subjects. The inferences investigated were Modus Ponens and Modus Tollens. Ninety undergraduates of the Second University of Naples, without any prior knowledge of logic or conditional reasoning, participated in this study. Results suggest that people understand conditionals in a syntactic way rather than in a probabilistic way, even though the perception of the conditional probability of q given p is at least partially involved in the comprehension of conditionals. They also showed that, in the presence of a conditional syllogism, inferences are not affected by the antecedent or consequent sizes. From a theoretical point of view, these findings suggest that it would be inappropriate to abandon the idea that conditionals are naturally understood in a syntactic way for the idea that they are understood in a probabilistic way.

Keywords: Conditionals, conditional probability, conditional syllogism, inferential task.

115 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model, with an F1-score of 83.60%, outperforms the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences, detects an anomaly with a threshold on the reconstruction error, and reaches an F1-score of 24.16%.
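
A minimal sketch of the pipeline described above, assuming a Keras/TensorFlow LSTM autoencoder per sensor and a scikit-learn random forest on reconstruction-error features; the window length, layer sizes and placeholder data are assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

SEQ_LEN = 64  # assumed window length per sensor sequence

def build_lstm_autoencoder(seq_len=SEQ_LEN):
    """One LSTM autoencoder per sensor, trained to reconstruct normal sequences."""
    inp = tf.keras.Input(shape=(seq_len, 1))
    z = tf.keras.layers.LSTM(32)(inp)                               # encoder
    z = tf.keras.layers.RepeatVector(seq_len)(z)
    z = tf.keras.layers.LSTM(32, return_sequences=True)(z)          # decoder
    out = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))(z)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def residual_features(x, x_hat):
    """Simple statistics of the reconstruction-error signal, per window."""
    r = np.abs(x - x_hat).reshape(len(x), -1)
    return np.column_stack([r.mean(axis=1), r.std(axis=1), r.max(axis=1)])

features = []
for sensor in ("temperature", "humidity", "power"):
    x_normal = np.random.rand(200, SEQ_LEN, 1)        # placeholder: normal training windows
    x_test = np.random.rand(300, SEQ_LEN, 1)          # placeholder: windows to classify
    ae = build_lstm_autoencoder()
    ae.fit(x_normal, x_normal, epochs=5, batch_size=32, verbose=0)  # normal data only
    features.append(residual_features(x_test, ae.predict(x_test, verbose=0)))

X = np.hstack(features)                               # fuse per-sensor residual features
y = np.random.randint(0, 2, size=len(X))              # placeholder anomaly labels from history
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```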

Keywords: Anomaly detection, autoencoder, data centers, deep learning.

114 Gene Expressions Associated with Ultrastructural Changes in Vascular Endothelium of Atherosclerotic Lesion

Authors: M. Maimunah, G.A. Froemming, H. Nawawi, M.I. Nafeeza, O. Effat, M.R. Rohayu Izanwati, M.S. Mohamed Saifulaman

Abstract:

Attachment of circulating monocytes to the endothelium is the earliest detectable event during the formation of atherosclerosis. Genes for adhesion molecules, chemokines and matrix proteases have been identified as being expressed in atherogenesis. Expression of these genes may influence the structural integrity of the luminal endothelium. The aim of this study is to relate changes in the ultrastructural morphology of the aortic luminal surface to gene expression of endothelial surface molecules, chemokine and MMP-12 in normal and hypercholesterolemic rabbits. The luminal endothelial surface of rabbit aortic tissue was examined by scanning electron microscopy (SEM) in low vacuum mode to ascertain ultrastructural changes during the development of atherosclerotic lesions. Gene expression of adhesion molecules, MCP-1 and MMP-12 was studied by real-time PCR. Ultrastructural observation of the aortic luminal surface showed changes from a normal, regular, smooth, intact endothelium to an irregular luminal surface, including a marked globular appearance and ruptures of the membrane layer. Real-time PCR demonstrated differential expression of the studied genes in atherosclerotic tissues. The ultrastructural changes in the aortic tissue of hypercholesterolemic rabbits are suggested to be related to underlying changes in the gene expression of endothelial surface molecules, chemokine and MMP-12.

Keywords: Ultrastructure of luminal endothelial surface, Macrophage metalloelastase (MMP-12), Real-time PCR.

113 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least in terms of source-to-detector distance, in order to obtain reliable measurements and efficiency. In this study a commercially available 25 mm x 25 mm BrilLanCe™ 380 LaBr3:Ce(5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should be chosen to balance efficiency against pulse pile-up suppression, as otherwise pile-up corrections would be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
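
For the on-axis point-source case, the distance dependence of the geometric efficiency described above can be approximated from the solid angle subtended by the detector face. The sketch below uses the simple disk formula with the 25 mm crystal face, and ignores intrinsic efficiency, attenuation and summing effects.

```python
import math

def geometric_efficiency(distance_cm, detector_diameter_cm=2.5):
    """Fraction of the 4*pi solid angle subtended by the flat face of a
    cylindrical detector for an on-axis point source (disk approximation):
    Omega = 2*pi*(1 - d / sqrt(d^2 + r^2))."""
    r = detector_diameter_cm / 2.0
    omega = 2.0 * math.pi * (1.0 - distance_cm / math.hypot(distance_cm, r))
    return omega / (4.0 * math.pi)

for d in (5, 10, 15, 20):   # the source-to-detector distances used in the study
    print(f"{d} cm: geometric efficiency = {geometric_efficiency(d):.4f}")
```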

Keywords: BrilLanCe™ 380 LaBr3:Ce(5%), Coincidence summing, GATE simulation, Geometric efficiency

112 Estimation of Thermal Conductivity of Nanofluids Using MD-Stochastic Simulation Based Approach

Authors: Sujoy Das, M. M. Ghosh

Abstract:

The thermal conductivity of a fluid can be significantly enhanced by dispersing nano-sized particles in it, and the resultant fluid is termed a "nanofluid". A theoretical model for estimating the thermal conductivity of a nanofluid is proposed here. It is based on the mechanism that evenly dispersed nanoparticles within a nanofluid undergo Brownian motion, in the course of which the nanoparticles repeatedly collide with the heat source. During each collision a rapid heat transfer occurs owing to the solid-solid contact. Molecular dynamics (MD) simulation of the collision of nanoparticles with the heat source has shown that there is a pulse-like pick-up of heat by the nanoparticles within 20-100 ps, the extent of which depends not only on the thermal conductivity of the nanoparticles, but also on their elastic and other physical properties. After the collision the nanoparticles undergo Brownian motion in the base fluid and release the excess heat to the surrounding base fluid within 2-10 ms. The Brownian motion and associated temperature variation of the nanoparticles have been modeled by stochastic analysis. Repeated occurrence of these events by the suspended nanoparticles contributes significantly to the characteristic thermal conductivity of the nanofluid, which has been estimated by the present model for an ethylene glycol based nanofluid containing Cu nanoparticles of size ranging from 8 to 20 nm, with a Gaussian size distribution. The prediction of the present model shows reasonable agreement with the experimental data available in the literature.
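
A toy sketch of the two stages described above, assuming Stokes-Einstein diffusion for the Brownian walk and a simple exponential release of the excess heat to the base fluid; all parameter values are illustrative, not the fitted values of the model.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's values)
kB, T = 1.380649e-23, 300.0     # Boltzmann constant [J/K], fluid temperature [K]
mu, d_p = 1.6e-2, 12e-9         # ethylene glycol viscosity [Pa s], particle diameter [m]
tau_release = 5e-3              # heat-release time constant to the base fluid [s] (2-10 ms range)
dt, n_steps = 1e-4, 200         # time step [s] and number of steps

D = kB * T / (3.0 * np.pi * mu * d_p)   # Stokes-Einstein diffusion coefficient [m^2/s]

rng = np.random.default_rng(0)
pos = np.zeros((n_steps, 3))
excess_T = np.zeros(n_steps)
excess_T[0] = 40.0                      # pulse-like temperature pick-up at the heat source [K]

for k in range(1, n_steps):
    # Brownian displacement over dt (isotropic Gaussian steps)
    pos[k] = pos[k - 1] + rng.normal(scale=np.sqrt(2 * D * dt), size=3)
    # Exponential release of the excess heat to the surrounding base fluid
    excess_T[k] = excess_T[k - 1] * np.exp(-dt / tau_release)

print(f"D = {D:.2e} m^2/s, excess temperature after {n_steps*dt*1e3:.0f} ms: {excess_T[-1]:.2f} K")
```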

Keywords: Brownian dynamics, Molecular dynamics, Nanofluid, Thermal conductivity.

111 Twitter Sentiment Analysis during the Lockdown on New Zealand

Authors: Smah Doeban Almotiri

Abstract:

Sentiment analysis is one of the most common fields of natural language processing (NLP). The feeling conveyed in a text can be successfully mined for various events using sentiment analysis. Twitter is viewed as a reliable data source for sentiment analytics studies since people used social media to receive and exchange different types of data on a broad scale during the COVID-19 epidemic. The processing of such data may aid in making critical decisions on how to keep the situation under control. The aim of this research is to look at how sentiment differed in a single geographic region during the lockdown at two different times. A total of 1162 tweets related to the COVID-19 pandemic lockdown were analyzed using the keyword hashtags (lockdown, COVID-19): the first sample of tweets was from March 23, 2020, until April 23, 2020, and the second sample, for the following year, was from March 1, 2021, until April 4, 2021. Natural language processing (NLP), a form of artificial intelligence, was used to calculate the sentiment value of all the tweets using the AFINN lexicon sentiment analysis method. The findings revealed that the sentiment at both times during the region's lockdown was positive in the samples of this study, which are specific to the geographical area of New Zealand. Future research could apply machine learning sentiment methods such as Crystal Feel and extend the sample by using tweets collected over a longer period of time.
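
A minimal example of AFINN-based scoring, using the `afinn` Python package and two made-up tweets; the actual collection, cleaning and aggregation steps of the study are not reproduced here.

```python
from afinn import Afinn

afinn = Afinn()   # English AFINN word list

tweets = [
    "Staying home but feeling grateful and safe during lockdown",
    "This lockdown is exhausting and the uncertainty is awful",
]

scores = [afinn.score(t) for t in tweets]
for tweet, score in zip(tweets, scores):
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{score:+.1f}  {label}: {tweet}")

print("mean sentiment of sample:", sum(scores) / len(scores))
```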

Keywords: sentiment analysis, Twitter analysis, lockdown, Covid-19, AFINN, NodeJS

110 Artificial Intelligence: A Comprehensive and Systematic Literature Review of Applications and Comparative Technologies

Authors: Z. M. Najmi

Abstract:

Over the years, the question around Artificial Intelligence has always been one with many answers. Whether through use in business and industry or through complicated algorithmic programming, management of these technologies has always been the core focus. More recently, these technologies have been questioned in industry and society alike as to whether they have improved human-centred design, assisted choices and objectives, and had a hand in systematic processes across the board. With these questions, the answer may lie within AI technologies and the steps needed to remove common human error. Elements such as Machine Learning, Deep Learning, Recommender Systems and Natural Language Processing will all be features to consider moving forward. Our previous interventions with AI applications have resulted in increased productivity; however, they have raised concerns about the continuation of traditional human-centred occupations. Emerging technologies such as Augmented Reality and Virtual Reality have all played a part in this during AI's prominent rise. As mentioned, AI has been constantly under the microscope; its benefits and drawbacks may seem endless, but AI is something we must take notice of and adapt into our everyday lives. The aim of this paper is to give an overview of the technologies surrounding AI and its related technologies. A comprehensive review has been written as a timeline of the developing events and key points in the history of Artificial Intelligence. This research is gathered entirely from secondary sources and academic statements of knowledge, brought together to produce an understanding of the timeline of AI.

Keywords: Artificial Intelligence, Deep Learning, Augmented Reality, Reinforcement Learning, Machine Learning, Supervised Learning.

109 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System That Includes Servers with Various Capacities

Authors: Yoshiaki Shikata, Nobutane Hanayama

Abstract:

We present a prioritized, limited multi-server processor-sharing (PS) system where the servers have various capacities and N (≥ 2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests to be processed is limited to less than a certain number. Routing strategies for such prioritized, limited multi-server PS systems that take into account the capacity of each server are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (shortening) of the remaining sojourn time of each request receiving service can be calculated using the number of requests of each class and the priority ratio. Utilising a simulation program that executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is clarified.

Keywords: Processor sharing, multi-server, various capacity, N priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation.

108 Performance Evaluation of Prioritized Limited Processor-Sharing System

Authors: Yoshiaki Shikata, Wataru Katagiri, Yoshitaka Takahashi

Abstract:

We propose a novel prioritized limited processor-sharing (PS) rule and a simulation algorithm for the performance evaluation of this rule. The performance measures of practical interest are evaluated using this algorithm. Suppose that there are two classes and that an arriving (class-1 or class-2) request encounters n1 class-1 and n2 class-2 requests (including the arriving one) in a single-server system. According to the proposed rule, class-1 requests individually and simultaneously receive m / (m*n1 + n2) of the service-facility capacity, whereas class-2 requests receive 1 / (m*n1 + n2) of it, if m*n1 + n2 ≤ C. Otherwise (m*n1 + n2 > C), the arriving request will be queued in the corresponding class waiting room or rejected. Here, m (≥ 1) denotes the priority ratio, and C (≤ ∞), the service-facility capacity. In this rule, when a request arrives at [or departs from] the system, the extension [shortening] of the remaining sojourn time of each request receiving service can be calculated using the number of requests of each class and the priority ratio. Employing a simulation program to execute these events and calculations enables us to analyze the performance of the proposed prioritized limited PS rule, which is realistic in a time-sharing system (TSS) with a sufficiently small time slot. Moreover, this simulation algorithm is expanded for the evaluation of the prioritized limited PS system with N ≥ 3 priority classes.
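
The allocation rule quoted above can be written directly as a small helper; the admission check and the shares follow the formulas in the abstract, while the example numbers are arbitrary.

```python
def service_shares(n1, n2, m=2, capacity=10):
    """Per-request service shares under the prioritized limited PS rule:
    each class-1 request gets m/(m*n1 + n2) of the capacity and each class-2
    request gets 1/(m*n1 + n2), provided m*n1 + n2 <= C; otherwise the
    arriving request is queued or rejected."""
    load = m * n1 + n2
    if load > capacity:
        return None                      # arriving request must wait or is lost
    return m / load, 1.0 / load          # (class-1 share, class-2 share)

# Example: 3 class-1 and 4 class-2 requests in service, priority ratio m = 2
print(service_shares(3, 4))              # -> (0.2, 0.1): class-1 served twice as fast
```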

Keywords: PS rule, priority ratio, service-facility capacity, simulation algorithm, sojourn time, performance measures

107 Hydrological Characterization of a Watershed for Streamflow Prediction

Authors: Oseni Taiwo Amoo, Bloodless Dzwairo

Abstract:

In this paper, we extend the versatility and usefulness of GIS as a methodology for river basin hydrologic characteristics analysis (HCA). The Gurara River basin, located in North-Central Nigeria, is presented in this study. This is ongoing research using a spatial Digital Elevation Model (DEM) and Arc-Hydro tools to take inventory of the basin characteristics in order to predict the effect of water abstraction on the streamflow regime. One of the main concerns of hydrological modelling is the quantification of runoff from rainstorm events. In practice, the Soil Conservation Service (SCS) curve number method and the conventional procedure called the rational technique are still generally used. These traditional lumped hydrological models convert the statistical properties of rainfall in a river basin into observed runoff and hydrographs. However, the models give little or no spatially distributed information on rainfall and the basin's physical characteristics. Therefore, this paper synthesizes morphometric parameters in generating runoff. The expected results on basin characteristics such as size, area, shape and slope of the watershed, together with stream network analysis, could be useful in estimating streamflow discharge. Water resources managers and irrigation farmers could utilize the tool for determining the net return from available scarce water resources, where past records of land and climate data are sparse.
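
Since the abstract mentions the SCS curve number method, the following sketch shows the standard SI-unit form of that runoff calculation; the curve number and storm depth are illustrative and unrelated to the Gurara basin data.

```python
def scs_runoff_mm(rainfall_mm, curve_number):
    """Direct runoff depth from the SCS curve number method (SI form):
    S = 25400/CN - 254, Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
    S = 25400.0 / curve_number - 254.0   # potential maximum retention [mm]
    Ia = 0.2 * S                         # initial abstraction [mm]
    if rainfall_mm <= Ia:
        return 0.0
    return (rainfall_mm - Ia) ** 2 / (rainfall_mm - Ia + S)

# Example: a 75 mm storm over a sub-basin with a composite curve number of 80
print(f"Runoff depth: {scs_runoff_mm(75.0, 80):.1f} mm")
```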

Keywords: Hydrological characteristic, land and climate, runoff discharge, streamflow.

106 Application of Java-based Pointcuts in Aspect Oriented Programming (AOP) for Data Race Detection

Authors: Sadaf Khalid, Fahim Arif

Abstract:

The wide applicability of concurrent programming practices in developing various software applications leads to different concurrency errors, amongst which data races are the most important. Java provides extensive support for concurrent programming through its various concurrency packages. Aspect-oriented programming (AOP) is a modern programming paradigm facilitating the runtime interception of events of interest and can be effectively used to handle concurrency problems. AspectJ, an aspect-oriented extension to Java, facilitates the application of AOP concepts to data race detection. Volatile variables are usually considered thread-safe, but they can become possible candidates for data races if non-atomic operations are performed concurrently upon them. Various data race detection algorithms have been proposed in the past, but this issue of volatility and atomicity is still unaddressed. The aim of this research is to propose suggestions for incorporating certain conditions for data race detection at the volatile fields of Java programs, by taking into account the support for atomicity in the Java concurrency packages and making use of pointcuts. Two simple test programs demonstrate the results of the research. The results are verified on two different Java Development Kits (JDKs) for the purpose of comparison.

Keywords: Aspect Bench Compiler (abc), Aspect Oriented Programming (AOP), AspectJ, Aspects, Concurrency packages, Concurrent programming, Cross-cutting Concerns, Data race, Eclipse, Java, Java Development Kits (JDKs), Pointcuts

105 Standalone Docking Station with Combined Charging Methods for Agricultural Mobile Robots

Authors: Leonor Varandas, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

One of the biggest concerns in the field of agriculture is the energy efficiency of the robots that will perform agricultural activities, together with their charging methods. In this paper, two different charging methods for agricultural standalone docking stations are presented, which take into account variables such as field size and its irregularities, the nature of the work the robot will perform, and deadlines that have to be respected, among others. Their features also depend on the orchard, the season, the battery type and its technical specifications and cost. The first charging method focuses on wireless charging and presents more benefits for small fields. The second charging method relies on battery replacement and is more suitable for large fields, since it avoids stopping the robot to recharge. Among the many methods available to charge a battery, CC-CV was considered the most appropriate for both its simplicity and its effectiveness. The choice of battery for agricultural purposes is of the utmost importance. While the most commonly used battery is the Li-ion battery, this study also discusses the use of a new type of graphene-based battery with 45% more capacity than the Li-ion one. A Battery Management System (BMS) is applied for battery balancing. All these approaches combined show promise for improving a great deal of technical agricultural work, not just in terms of planting and harvesting but also regarding techniques to prevent harmful events such as pests and weeds, or even to reduce crop time and cost.

Keywords: Agricultural mobile robot, charging base methods, battery replacement method, wireless charging method.

104 Unpacking Tourist Experience: A Case Study of Chinese Tourists Visiting the UK

Authors: Guanhao Tong, Li Li, Ben David

Abstract:

This study aims to provide an explanatory account of how the leisure tourist experience emerges from tourists and their surroundings through a critical realist lens. This was achieved by applying Archer’s realist social theory as the underlying theoretical ground to unpack the interplays between the external (tourism system or structure) and the internal (tourists or agency) factors. This theory argues that social phenomena can be analysed in three domains - structure, agency, and culture (SAC), and along three phases – structure conditioning, sociocultural interactions, and structure elaboration. From the realist perspective, the world is an open system; events and discourses are irreducible to present individuals and collectivities. Therefore, identifying the processes or mechanisms is key to help researchers understand how social reality is brought about. Based on the contextual nature of the tourist experience, the research focuses on Chinese tourists (from mainland China) to London as a destination and British culture conveyed through the concept of the destination image. This study uses an intensive approach based on Archer’s M/M approach to discover the mechanisms/processes of the emergence of the tourist experience. Individual interviews were conducted to reveal the underlying causes of lived experiences of the tourists. Secondary data were also collected to understand how British destinations are portrayed to Chinese tourists.

Keywords: Chinese Tourists, Destination Image, M/M Approach, Realist Social Theory, social mechanisms, tourist experience.

103 Analysis of the Elastic Energy Released and Characterization of the Eruptive Episodes' Intensity during 2014-2015 at El Reventador Volcano, Ecuador

Authors: Paúl I. Cornejo

Abstract:

The elastic energy released through Strombolian explosions has been widely studied, detailing various processes, sources, and precursory events at several volcanoes. We performed an analysis based on the relative partitioning of the elastic energy radiated into the atmosphere and the ground by Strombolian-type explosions recorded at El Reventador volcano, using infrasound and seismic signals from episodes of high and moderate seismicity during intense eruptive stages of explosive and effusive activity. Our results show that considerable values of the Volcano Acoustic-Seismic Ratio (VASR, or η) are obtained at high-seismicity stages. VASR is a physical diagnostic of explosive degassing that we used to compare eruption mechanisms at El Reventador volcano for two datasets of explosions recorded at a broadband (BB) seismic and infrasonic station located ~5 kilometers from the vent. We conclude that the acoustic energy EA released during explosive activity (VASR η = 0.47, standard deviation σ = 0.8) is higher than the EA released during effusive activity, therefore producing the highest values of η. Furthermore, we analyzed and characterized the eruptive intensity of two high-seismicity episodes, obtaining a η three times higher for an episode of effusive activity with an occasional explosive component (η = 0.32, σ = 0.42) than for an episode of purely effusive, but more energetic, activity (η = 0.11, σ = 0.18).
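
The VASR computation can be sketched as below, using the commonly applied simplified acoustic and seismic energy estimates for isotropic radiation (attenuation, site and directivity corrections omitted); the traces, densities and wave speeds are placeholders, not the station's values.

```python
import numpy as np

def vasr(pressure, velocity, dt, r, rho_air=1.2, c_air=340.0,
         rho_earth=2500.0, c_earth=2000.0):
    """Volcano Acoustic-Seismic Ratio eta = E_acoustic / E_seismic, using
    simplified isotropic-radiation energy estimates for an explosion recorded
    at distance r [m]: `pressure` is the excess infrasonic pressure [Pa] and
    `velocity` the seismic particle velocity [m/s]."""
    e_acoustic = (2.0 * np.pi * r**2 / (rho_air * c_air)) * np.sum(pressure**2) * dt
    e_seismic = 2.0 * np.pi * r**2 * rho_earth * c_earth * np.sum(velocity**2) * dt
    return e_acoustic / e_seismic

# Synthetic example for one explosion recorded ~5 km from the vent
t = np.arange(0, 30, 0.01)
p = 2.0 * np.exp(-t / 5.0) * np.sin(2 * np.pi * 1.0 * t)      # infrasound trace [Pa]
v = 1e-5 * np.exp(-t / 8.0) * np.sin(2 * np.pi * 2.0 * t)     # seismic trace [m/s]
print(f"VASR = {vasr(p, v, 0.01, 5000.0):.2f}")
```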

Keywords: Effusive, explosion quakes, explosive, strombolian, VASR.

102 Performance Evaluation of a Limited Round-Robin System

Authors: Yoshiaki Shikata

Abstract:

The performance of a limited Round-Robin (RR) rule is studied in order to clarify the characteristics of a realistic sharing model of a processor. Under the limited RR rule, the processor allocates to each request a fixed amount of time, called a quantum, in a fixed order. The number of requests being allocated these quanta is kept below a fixed value. Arriving requests that cannot be allocated quanta because of this restriction are queued or rejected. Practical performance measures, such as the relationship between the mean sojourn time, the mean number of requests, or the loss probability and the quantum size, are evaluated via simulation. In the evaluation, the requested service time of an arriving request is converted into a number of quanta. One of these quanta is included in each RR cycle, which is a series of quanta allocated to each request in a fixed order. The service time of the arriving request can be evaluated using the number of RR cycles required to complete the service, the number of requests receiving service, and the quantum size. An increase or decrease in the number of quanta necessary before service is completed is then re-evaluated at the arrival or departure of other requests. Tracking these events and calculations enables us to analyze the performance of our limited RR rule. In particular, we obtain the most suitable quantum size, which minimizes the mean sojourn time, for the case in which the switching time for each quantum is considered.

Keywords: Limited RR rule, quantum, processor sharing, sojourn time, performance measures, simulation, loss probability.

101 The Taiwanese Institutional Arrangement for Coastal Management Due to Climate Change

Authors: Wen-Hong Liu, Hao-Tang Jhan, Kun-Lung Lin, Meng-Tsung Lee

Abstract:

Weather disaster events have recently been frequent in Taiwan and have caused loss of life and property. Excessive concentration of population and a lack of integrated planning have led the Taiwanese coastal zone to face the impacts of climate change directly. Compared to many countries that have already set up legislation, competent authorities and national adaptation strategies, the capacity of coastal management to adapt to climate change is still insufficient in Taiwan. Therefore, it is necessary to establish a complete institutional arrangement for coastal management under climate change in order to protect the environment and sustain socio-economic development. This paper firstly reviews the impact of climate change on the Taiwanese coastal zone. Secondly, the development of the Taiwanese institutional arrangement for coastal management is introduced. This is followed by an analysis of four dimensions of the institutional arrangement: legal basis, competent authority, scientific and financial support, and international cooperation. The results show that the Taiwanese government should: 1) integrate the climate change issue into the Coastal Act, the Wetland Act and the Territorial Planning Act and pass them; 2) establish a high-level competent authority for coastal management; 3) set up a climate change disaster coordination platform; 4) link scientific information and decision makers; 5) establish a climate change adjustment fund; 6) participate actively in international climate change organizations and meetings; and 7) cooperate with neighbouring countries to exchange experiences.

Keywords: Climate Change, Coastal Zone Management, Institution Arrangement, Adaptation.

100 Resilience Assessment for Power Distribution Systems

Authors: Berna Eren Tokgoz, Mahdi Safa, Seokyon Hwang

Abstract:

Power distribution systems are essential and crucial infrastructures for the development and maintenance of a sustainable society. These systems are extremely vulnerable to various types of natural and man-made disasters. The assessment of resilience focuses on preparedness and mitigation actions under pre-disaster conditions. It also concentrates on response and recovery actions under post-disaster situations. The aim of this study is to present a methodology to assess the resilience of electric power distribution poles against wind-related events. The proposed methodology can improve the accuracy and rapidity of the evaluation of the conditions and the assessment of the resilience of poles. The methodology provides a metric for the evaluation of the resilience of poles under pre-disaster and post-disaster conditions. The metric was developed using mathematical expressions for physical forces that involve various variables, such as physical dimensions of the pole, the inclination of the pole, and wind speed. A three-dimensional imaging technology (photogrammetry) was used to determine the inclination of poles. Based on expert opinion, the proposed metric was used to define zones to visualize resilience. Visual representation of resilience is helpful for decision makers to prioritize their resources before and after experiencing a wind-related disaster. Multiple electric poles in the City of Beaumont, TX were used in a case study to evaluate the proposed methodology.  
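
A hedged sketch of the kind of physical expression such a metric relies on: wind drag on the pole plus a contribution from its measured inclination, compared against an assumed base-moment capacity to assign a colour zone. All numbers (drag coefficient, weight, capacity, thresholds) are illustrative, not the study's calibration.

```python
import math

def pole_base_moment(wind_speed, height, diameter, tilt_deg,
                     pole_weight=4000.0, rho_air=1.225, drag_coeff=1.2):
    """Wind drag F = 0.5*rho*v^2*Cd*A acting at mid-height of the projected
    area, plus the overturning moment from the pole's own weight when it is
    inclined (tilt obtained, e.g., from photogrammetry). Units: m, m/s, N."""
    area = height * diameter                                    # projected area [m^2]
    drag = 0.5 * rho_air * wind_speed**2 * drag_coeff * area    # wind force [N]
    m_wind = drag * height / 2.0                                # base moment from wind [N m]
    m_tilt = pole_weight * (height / 2.0) * math.sin(math.radians(tilt_deg))
    return m_wind + m_tilt

def resilience_zone(base_moment, moment_capacity=60e3):
    """Map the demand/capacity ratio to a colour zone for visualisation."""
    ratio = base_moment / moment_capacity
    return "green" if ratio < 0.5 else "yellow" if ratio < 1.0 else "red"

# Example: 12 m pole, 0.3 m diameter, 5 deg inclination, 40 m/s gust
m = pole_base_moment(40.0, 12.0, 0.3, 5.0)
print(f"base moment = {m/1e3:.1f} kN*m -> zone: {resilience_zone(m)}")
```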

Keywords: Photogrammetry, power distribution systems, resilience metric, system resilience, wind-related disasters.

99 Tide Contribution in the Flood Event of Jeddah City: Mathematical Modelling and Different Field Measurements of the Groundwater Rise

Authors: Aïssa Rezzoug

Abstract:

This paper aims to bring new elements demonstrating that the tide caused the groundwater to rise in the shoreline band on which the urban areas occur, especially in western coastal cities of the Kingdom of Saudi Arabia such as Jeddah. The reason for the recent inundation events in Jeddah was the groundwater rise in the city coupled with a strong precipitation event. This paper illustrates the tide's role in significantly increasing the groundwater level. It shows that internal groundwater recharge within the urban area is due not only to the excess water supply coming from surrounding areas through human activity, combined with the lack of a sufficient and efficient sewage system, but also to the tide effect. The research study follows a quantitative method to assess groundwater level rise risks through many in-situ measurements and mathematical modelling. The proposed approach shows that the groundwater level in the urban areas of the city on the shoreline band reaches the high-tide level without considering any input from precipitation. Despite the small tide in the Red Sea compared to other oceanic coasts, the groundwater level is considerably enhanced by the tide from the seaside and by the freshwater table from the landside of the city. In these conditions, the groundwater level becomes high in the city and prevents the soil from evacuating quickly enough the surface flow caused by a storm event, as was observed in the historical flood catastrophe of Jeddah in 2009.

Keywords: Flood, groundwater rise, Jeddah, tide.

98 A Framework for Enhancing Mobile Development Software for Rangsit University, Thailand

Authors: Thossaporn Thossansin

Abstract:

This paper presents the development of a mobile application for students at the Faculty of Information Technology, Rangsit University (RSU), Thailand. RSU is upgrading its enrollment process by improving its information systems. Students can download the RSU APP easily in order to access essential RSU information. The reason for having a mobile application is to help students access the system regardless of time and place. The objectives of this paper are: 1. to develop an application on the iOS platform for students at the Faculty of Information Technology, Rangsit University, Thailand; 2. to obtain the students' perception of the new mobile app. The target group is students from the freshman year to the senior year of the Faculty of Information Technology, Rangsit University. The new mobile application, called RSU APP, was developed by the Department of Information Technology, Rangsit University. It contains useful features and various functionalities, particularly those that support students. The core contents of the app consist of RSU announcements, calendar, events, activities, and e-books. The mobile app is developed on the iOS platform. User satisfaction was analyzed from interview data from 81 interviewees as well as from a Google Forms questionnaire in which 122 respondents were involved. The results show that users are satisfied with the application, giving it an overall satisfaction score of 4.67 (SD 0.52). The score for the question of whether users can learn and use the application quickly is high, at 4.82 (SD 0.71). On the other hand, the lowest satisfaction rating concerns the app's forms and lists, at 4.01 (SD 0.45).

Keywords: Mobile application, development of mobile application, framework of mobile development, software development for mobile devices.

97 Diversity for Safety and Security of Autonomous Vehicles against Accidental and Deliberate Faults

Authors: Anil Ranjitbhai Patel, Clement John Shaji, Peter Liggesmeyer

Abstract:

The safety and security of Autonomous Vehicles (AVs) are a growing concern: first, due to the increased number of safety-critical functions taken over by automotive embedded systems; second, due to the increased exposure of these software-intensive systems to potential attackers; and third, due to dynamic interaction in an uncertain and unknown environment at runtime, which results in changed functional and non-functional properties of the system. Frequently occurring environmental uncertainties, random component failures, and compromised security of the AVs might result in hazardous events, sometimes even in an accident, if left undetected. Beyond these technical issues, we argue that the safety and security of AVs against accidental and deliberate faults are poorly understood and rarely implemented. One possible way to overcome this is through the well-known diversity approach. As an effective approach to increase safety and security, diversity has been widely used in the aviation, railway, and aerospace industries. Thus, this paper proposes a fault-tolerance-by-diversity model that addresses the mitigation of accidental and deliberate faults through the application of structure and variant redundancy. The model can be used to design AVs with various types of diversity in hardware- and software-based multi-version systems. The paper evaluates the presented approach using an example from adaptive cruise control, followed by a discussion of the case study with initial findings.
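
As an illustration of structure and variant redundancy, the sketch below shows simple majority and median voting over diverse controller variants for an adaptive-cruise-control output; the voting scheme and tolerances are generic examples, not the paper's design.

```python
from collections import Counter
from statistics import median

def vote_discrete(outputs):
    """2-out-of-3 (majority) voting over diverse software variants that return
    a discrete command, e.g. 'brake' / 'keep' / 'accelerate'."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= 2 else "fallback_safe_state"

def vote_continuous(outputs, tolerance=0.5):
    """Median voting for continuous set-points (e.g. target acceleration in
    m/s^2) from redundant, diversely implemented controllers; variants
    deviating more than `tolerance` from the median are flagged as suspect."""
    m = median(outputs)
    faulty = [i for i, x in enumerate(outputs) if abs(x - m) > tolerance]
    return m, faulty

# Adaptive cruise control example with three diverse variants
print(vote_discrete(["brake", "brake", "keep"]))   # -> 'brake'
print(vote_continuous([0.8, 0.9, 3.5]))            # -> (0.9, [2]): variant 2 suspected faulty
```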

Keywords: Autonomous vehicles, diversity, fault-tolerance, adaptive cruise control, safety, security.

96 Effect of Reynolds Number on Wall-normal Turbulence Intensity in a Smooth and Rough Open Channel Using both Outer and Inner Scaling

Authors: Md Abdullah Al Faruque, Ram Balachandar

Abstract:

A sudden change of bed condition is frequent in open channel flow. The change of bed condition affects the turbulence characteristics in both the streamwise and wall-normal directions. Understanding the turbulence intensity in open channel flow is of vital importance to the modeling of sediment transport and resuspension, bed formation, entrainment, and the exchange of energy and momentum. A comprehensive study was carried out to understand the extent of the effect of Reynolds number and bed roughness on different turbulence characteristics in an open channel flow. Four different bed conditions (impervious smooth bed, impervious continuous rough bed, pervious rough sand bed, and impervious distributed roughness) and two different Reynolds numbers were adopted for this purpose. The effect of bed roughness on different turbulence characteristics is seen to be prevalent for most of the flow depth. The effect of Reynolds number on different turbulence characteristics is also evident for flow over the different beds, but its extent varies with the bed condition. Although the same sand grain is used to create the different rough bed conditions, the difference in turbulence characteristics is an indication that the specific geometry of the roughness has an influence on the turbulence characteristics. Roughness increases the contribution of the extreme turbulent events, which produce very large instantaneous Reynolds shear stress and can potentially influence sediment transport, resuspension of pollutants from the bed and the nutrient composition, eventually affecting the sustainability of benthic organisms.

Keywords: Open channel flow, Reynolds Number, roughness, turbulence.

95 Flow Visualization and Characterization of an Artery Model with Stenosis

Authors: Anis S. Shuib, Peter R. Hoskins, William J. Easson

Abstract:

Cardiovascular diseases, principally atherosclerosis, are responsible for 30% of world deaths. Atherosclerosis is due to the formation of plaque. The fatty plaque may be at risk of rupture, leading typically to stroke and heart attack. The plaque is usually associated with a high degree of lumen reduction, called a stenosis. It is increasingly recognized that the initiation and progression of disease and the occurrence of clinical events are a complex interplay between the local biomechanical environment and the local vascular biology. The aim of this study is to investigate the flow behavior through a stenosed artery. A physical experiment was performed using an artery model and a blood analogue fluid. The axisymmetric model constructed consists of contraction and expansion regions that follow the mathematical form of a cosine function. A 30% diameter reduction was used in this study. The flow field was measured using particle image velocimetry (PIV). Spherical particles with a 20 μm diameter were seeded in a water-glycerol-NaCl mixture. The steady-flow Reynolds number is 250. The area of interest is the region after the stenosis where the flow separation occurs. The velocity field was measured and the velocity gradient was investigated. There was a high particle concentration in the recirculation zone. The high velocity gradient formed immediately after the stenosis throat created a lift force that enhanced particle migration to the flow separation area.
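
One common cosine stenosis profile, given here as an assumed form of the geometry described above (the study's exact expression may differ), narrows the radius smoothly to a 30% reduction at the throat:

```python
import numpy as np

def stenosis_radius(x, r0=1.0, reduction=0.30, length=2.0):
    """Axisymmetric cosine stenosis profile: within |x| <= length/2 the radius
    follows r(x) = r0 * (1 - (reduction/2) * (1 + cos(2*pi*x/length))),
    giving a 30% reduction at the throat (x = 0) and the nominal radius r0
    at the ends of the contraction/expansion region."""
    x = np.asarray(x, dtype=float)
    r = np.full_like(x, r0)
    inside = np.abs(x) <= length / 2.0
    r[inside] = r0 * (1.0 - (reduction / 2.0) * (1.0 + np.cos(2.0 * np.pi * x[inside] / length)))
    return r

x = np.linspace(-2.0, 2.0, 9)
print(np.round(stenosis_radius(x), 3))   # radius dips to 0.7*r0 at the throat
```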

Keywords: Stenosis artery, Biofluid mechanics, PIV

94 Unattended Crowdsensing Method to Monitor the Quality Condition of Dirt Roads

Authors: Matías Micheletto, Rodrigo Santos, Sergio F. Ochoa

Abstract:

In developing countries, most roads in rural areas are dirt roads. They require frequent maintenance since they are affected by erosive events, such as rain or wind, and by the transit of heavy-weight trucks and machinery. Early detection of damage to the road condition is a key aspect, since it allows reducing the maintenance time and cost, as well as the limitations on other vehicles travelling through. Most proposals that help address this problem require the explicit participation of drivers, a permanent internet connection, or substantial instrumentation in vehicles or roads. These constraints limit the suitability of such proposals when applied in developing regions, like Latin America. This paper proposes an alternative method, based on unattended crowdsensing, to determine the quality of dirt roads in rural areas. This method involves the use of a mobile application that complements the road condition surveys carried out by the organizations in charge of road network maintenance, giving them early warnings about road areas that could require maintenance. Drivers can also take advantage of the early warnings while they move through these roads. The method was evaluated using information from a public dataset. Although preliminary, the results indicate the proposal is potentially suitable to provide awareness about dirt road conditions to drivers, transportation authorities and road maintenance companies.

Keywords: Dirt roads automatic quality assessment, collaborative system, unattended crowdsensing method, roads quality awareness provision.

93 Piping Fragility Composed of Different Materials by Using OpenSees Software

Authors: Woo Young Jung, Min Ho Kwon, Bu Seog Ju

Abstract:

A failure of a non-structural component can cause significant damage in critical facilities such as nuclear power plants and hospitals. Historically, it was reported that damage from the leakage of sprinkler systems resulted in the shutdown of hospitals for several weeks after the 1971 San Fernando and 1994 Northridge earthquakes. In most cases, water leakages were observed at the cross joints, sprinkler heads, and T-joint connections in piping systems during and after the seismic events. Hence, the primary objective of this study was to understand the seismic performance of T-joint connections and to develop an analytical Finite Element (FE) model for the T-joint systems of a 2-inch fire protection piping system in hospitals subjected to seismic ground motions. In order to evaluate the FE models of the piping systems using OpenSees, two types of materials were used: 1) the Steel02 material and 2) the Pinching4 material. Results of the current study revealed that the nonlinear moment-rotation FE models for the threaded T-joint reconciled well with the experimental results for both FE material models. However, the system-level fragility determined from multiple nonlinear time history analyses at the threaded T-joint was slightly different. The system-level fragility at the T-joint determined with the Pinching4 material was more conservative than that obtained using the Steel02 material in the piping system.
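
System-level fragility from multiple nonlinear time-history analyses is typically summarised with a lognormal curve; the sketch below fits such a curve to hypothetical leakage fractions (the intensity measure, data points and starting values are placeholders, not the OpenSees results).

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def lognormal_fragility(im, median, beta):
    """P(leakage | IM): lognormal CDF commonly used for seismic fragility."""
    return norm.cdf(np.log(im / median) / beta)

# Hypothetical results of multiple nonlinear time-history analyses:
# intensity measure of each record [g] and the observed fraction of leakage
pga = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.2, 1.5])
frac_failed = np.array([0.0, 0.0, 0.1, 0.2, 0.35, 0.5, 0.7, 0.85, 0.9, 1.0])

(median, beta), _ = curve_fit(lognormal_fragility, pga, frac_failed, p0=[0.6, 0.4])
print(f"median capacity = {median:.2f} g, dispersion beta = {beta:.2f}")
print(f"P(leakage | PGA = 0.5 g) = {lognormal_fragility(0.5, median, beta):.2f}")
```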

Keywords: Fragility, T-joint, Piping, Leakage, Sprinkler.

92 Adjustment of a PET Scanner for PEPT

Authors: Alireza Sadrmomtaz

Abstract:

Positron emission particle tracking (PEPT) is a technique in which a single radioactive tracer particle can be accurately tracked as it moves. A limitation of PET is that, in order to reconstruct a tomographic image, it is necessary to acquire a large volume of data (millions of events), so it is difficult to study rapidly changing systems. In view of this, PEPT is a very fast process compared with PET. In PEPT, detecting both photons defines a line, and the annihilation is assumed to have occurred somewhere along this line. The location of the tracer can be determined to within a few mm from the coincident detection of a small number of pairs of back-to-back gamma rays, using triangulation. This can be achieved many times per second, and the track of a moving particle can be reliably followed. The technique was invented at the University of Birmingham [1]. The aim in PEPT is not to form an image of the tracer particle but simply to determine its location over time. If this tracer is followed for a long enough period within a closed, circulating system, it explores all possible types of motion. The application of PEPT to industrial process systems carried out at the University of Birmingham falls into two categories: the behaviour of granular materials and of viscous fluids. Granular materials are processed in industry, for example in the manufacture of pharmaceuticals, ceramics, food and polymers, and PEPT has been used in a number of ways to study the behaviour of these systems [2]. PEPT allows the possibility of tracking a single particle within the bed [3]. PEPT has also been used for studying systems such as fluid flow and viscous fluids in mixers [4], using a neutrally buoyant tracer particle [5].
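
The triangulation step can be illustrated with a least-squares estimate of the point closest to a set of coincidence lines; this is a generic sketch with synthetic lines of response, not the Birmingham location algorithm.

```python
import numpy as np

def locate_tracer(points, directions):
    """Least-squares triangulation of the tracer position from a set of
    coincidence lines (lines of response). Each line i is given by a point
    `points[i]` on the line and a direction `directions[i]`; the estimate
    minimises the summed squared distance to all lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)     # projector orthogonal to the line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Synthetic example: lines of response through a tracer at (0.1, -0.2, 0.05) m
rng = np.random.default_rng(3)
true_position = np.array([0.1, -0.2, 0.05])
dirs = rng.normal(size=(50, 3))
pts = true_position + rng.normal(scale=2e-3, size=(50, 3))   # a few mm of detection noise
print(locate_tracer(pts, dirs))
```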

Keywords: PET, BGO, Particle Tracking, ECAT 931, List mode, PEPT.

91 Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy

Authors: Zviad Ghadua, Biswa Bhattacharya

Abstract:

The Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help to find the rainfall amount that would cause flooding. An alternative approach, mostly tested on Italian Alpine catchments, is based on determining threshold discharges from past events and on finding whether or not an oncoming flood exceeds critical discharge thresholds determined beforehand. Both approaches suffer from large uncertainties in forecasting flash floods as, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty leads to the question of whether a probabilistic model is preferable over a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach in flash flood forecasting. A prior probability of flooding is derived based on historical data. Additional information, such as the antecedent moisture condition (AMC) and the rainfall amount over any rainfall threshold, is used in computing the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed using the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we can conclude that the Bayesian approach to flash flood forecasting provides more realistic forecasts than the FFG.
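
The Bayesian update described above reduces to a one-line application of Bayes' rule once the prior and the two likelihoods are estimated; the numbers below are purely illustrative, not the Posina values.

```python
def posterior_flood_probability(p_flood, p_cond_given_flood, p_cond_given_no_flood):
    """Bayes' rule for the probability of flooding given the observed
    conditions (e.g. rainfall over a threshold and a given AMC class):
    P(F | C) = P(C | F) P(F) / [P(C | F) P(F) + P(C | not F) P(not F)]."""
    num = p_cond_given_flood * p_flood
    den = num + p_cond_given_no_flood * (1.0 - p_flood)
    return num / den

# Illustrative numbers: prior from historical flood frequency, likelihoods from
# counting how often wet AMC plus rainfall above the threshold were observed
# with and without a subsequent flash flood.
prior = 0.05
print(posterior_flood_probability(prior, 0.70, 0.10))   # -> about 0.27
```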

Keywords: Flash flood, Bayesian, flash flood guidance, FFG, forecasting, Posina.

90 The Analysis of Internet and Social Media Behaviors of the Students in the Higher School of Vocational and Technical Sciences

Authors: Mehmet Balci, Sakir Tasdemir, Mustafa Altin, Ozlem Bozok

Abstract:

Our globalizing world has become almost a small village, and everyone can access any information at any time. Everyone lets each other know who is doing what and where. We can learn which social events occur in which part of the world. From the perspective of education, the course notes that a lecturer uses in lessons at a university in any state of America can be examined by a student studying in a city in Africa or the Far East. This dizzying communication has happened thanks to rapid developments in computer and internet technologies. While these developments were occurring in the world, Turkey, which has a very large young population and whose electronic infrastructure is rapidly improving, has also been affected by them. Nowadays, mobile devices have become common, and this increases data traffic in social networks. This study was carried out on students in different age groups in the Department of Computer Technology at Selcuk University Vocational School of Technical Sciences. Students' opinions about the use of the internet and social media were obtained. Features such as internet and social media usage skills, purposes, frequency of use, access facilities and tools, social life, and effects on vocational education were explored. The positive and negative effects of internet and social media use on the students in this department were evaluated from different perspectives and results were obtained. In addition, relations and differences were identified statistically.

Keywords: Computer technologies, internet use, social network, higher vocational school.

89 Performing Diagnosis in Building with Partially Valid Heterogeneous Tests

Authors: Houda Najeh, Mahendra Pratap Singh, Stéphane Ploix, Antoine Caucheteux, Karim Chabir, Mohamed Naceur Abdelkrim

Abstract:

Building systems are highly vulnerable to different kinds of faults and human misbehaviour. Energy efficiency and user comfort are directly affected by abnormalities in building operation. The available fault diagnosis tools and methodologies rely in particular on rules or purely model-based approaches. It is assumed that a model- or rule-based test could be applied to any situation without taking into account the actual testing context. Contextual tests with a validity domain could greatly reduce the design effort for detection tests. The main objective of this paper is to consider the validity of tests when validating the test model, taking into account non-modeled events such as occupancy, weather conditions, and door and window openings, and integrating the expert's knowledge of the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-based, range-based and model-based tests, known as heterogeneous tests, is proposed to reduce the modeling complexity. The calculation of logical diagnoses, drawing on artificial intelligence, provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.

Keywords: Heterogeneous tests, validity, building system, sensor grids, sensor fault, diagnosis, fault detection and isolation.
