Search results for: dynamic Bayesian networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6556


4156 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes that make a portion of the population’s fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents an approach that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z representing the system’s current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e. subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenvalue matrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. The result is a self-correcting recognition process that can identify fingerspelling from a variety of sign languages and determine both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
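As a rough illustration of the corrector pipeline described above (centering, Kaiser-rule dimensionality reduction, whitening, and a separating hyperplane), the following sketch implements a simplified single-cluster variant; the function names and the use of the mean error direction as the hyperplane normal are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def fit_corrector(S, Y):
    """Sketch of a single-hyperplane corrector (illustrative only).
    S holds all measurement vectors x_i; Y holds the measurements
    that led to incorrect predictions."""
    mu = S.mean(axis=0)
    Sc, Yc = S - mu, Y - mu                        # centre both sets

    # Kaiser-rule variant: keep principal components whose eigenvalue
    # exceeds the mean eigenvalue of the covariance matrix.
    evals, evecs = np.linalg.eigh(np.cov(Sc, rowvar=False))
    keep = evals > evals.mean()
    W = evecs[:, keep] / np.sqrt(evals[keep])      # project and whiten

    Sw, Yw = Sc @ W, Yc @ W
    w = Yw.mean(axis=0)                            # mean error direction
    w /= np.linalg.norm(w)
    b = 0.5 * (Sw @ w).mean() + 0.5 * (Yw @ w).mean()  # midway threshold
    return mu, W, w, b

def is_error(x, mu, W, w, b):
    """Report x as an error if it falls on the error side of the plane."""
    return float(((x - mu) @ W) @ w) > b
```

In practice the abstract describes one hyperplane per cluster of errors; the single-direction version above shows the geometric idea in the fewest lines.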

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 84
4155 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet

Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec

Abstract:

Studying the flow characteristics of a turbulent offset jet is an important topic among researchers worldwide because of its many engineering applications. Common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft and HVAC systems, environmental discharges, film cooling and many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall that is parallel to the axis of the jet exit but offset from it by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall and the bottom wall creates a low-pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point; this is referred to as the Coanda effect. Further downstream, after the reattachment point, the offset jet has the characteristics of a wall jet flow. The offset jet therefore combines characteristics of free, impingement and wall jets, and is relatively more complex than any of these flows alone. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet at different offset height ratios (the ratio of the distance from the jet exit to the impingement bottom wall to the jet nozzle diameter). To this end, a numerical study of a three-dimensional offset jet flow was conducted by solving the governing Navier–Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model. A detailed discussion is provided of the flow and thermal characteristics in the form of streamlines, mean velocity vectors, the pressure field and the Reynolds stresses.

Keywords: offset jet, offset ratio, numerical simulation, RSM

Procedia PDF Downloads 287
4154 Forecasting Lake Malawi Water Level Fluctuations Using Stochastic Models

Authors: M. Mulumpwa, W. W. L. Jere, M. Lazaro, A. H. N. Mtethiwa

Abstract:

The study considered Seasonal Autoregressive Integrated Moving Average (SARIMA) processes to select an appropriate stochastic model for forecasting the monthly Lake Malawi water-level data for the period 1986 through 2015. The appropriate model was chosen in the form SARIMA (p, d, q)(P, D, Q)S. The autocorrelation function (ACF), partial autocorrelation function (PACF), Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), Box–Ljung statistics, correlogram and distribution of the residual errors were estimated. A SARIMA (1, 1, 0)(1, 1, 1)12 model was selected to forecast the monthly Lake Malawi water levels from August 2015 to December 2021. The plotted time series showed that Lake Malawi water levels have been decreasing since 2010, but not as sharply as in 1995 through 1997. The forecast of the Lake Malawi water levels until 2021 showed a mean of 474.47 m, ranging from 473.93 to 475.02 m at the 80% and 90% confidence intervals, against registered means of 473.398 m in 1997 and 475.475 m in 1989, the lowest and highest water levels in the lake, respectively, since 1986. The forecast also showed that the water level of Lake Malawi will drop by 0.57 m compared with the mean water levels recorded in previous years. These results suggest that the Lake Malawi water level is unlikely to fall below that recorded in 1997. Utilisation and management of water-related activities and programs on the lake should therefore allow for such scenarios. The findings suggest a need to manage Lake Malawi jointly and prudently with other stakeholders, starting from the catchment area. This will reduce the impact of anthropogenic activities on the lake’s water quality, water level, and aquatic and adjacent terrestrial ecosystems, thereby ensuring its resilience to climate change impacts.
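The differencing implied by the SARIMA(p, d, q)(P, D, Q)s notation can be sketched as follows; this is a minimal illustration of the (1, 1, 0)(1, 1, 1)12 model structure (d = 1, D = 1, s = 12), not the fitting procedure used in the study:

```python
import numpy as np

def sarima_difference(y, d=1, D=1, s=12):
    """Apply the (1-B)^d (1-B^s)^D differencing implied by a
    SARIMA(p, d, q)(P, D, Q)_s model, e.g. the (1, 1, 0)(1, 1, 1)_12
    model selected for the Lake Malawi monthly series.  The ARMA
    terms are then fitted to the differenced series."""
    z = np.asarray(y, dtype=float)
    for _ in range(D):                 # seasonal differencing, lag s
        z = z[s:] - z[:-s]
    for _ in range(d):                 # regular differencing, lag 1
        z = z[1:] - z[:-1]
    return z
```

Applied to a series with a linear trend plus a fixed 12-month seasonal pattern, this differencing removes both components exactly, which is why d = 1, D = 1 suits a trending, seasonal series like monthly lake levels.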

Keywords: forecasting, Lake Malawi, water levels, water level fluctuation, climate change, anthropogenic activities

Procedia PDF Downloads 208
4153 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading

Authors: Vaso K. Kapnopoulou, Piero Caridis

Abstract:

The fatigue of ship structural details is of major concern in the maritime industry, as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the finite element method in ABAQUS/CAE. The fatigue life was calculated using Miner’s rule, with the long-term distribution of the stress range represented by a two-parameter Weibull distribution. The cumulative damage ratio was estimated from the fatigue damage arising from the stress range occurring at each loading condition. For this purpose, a cargo hold model was first generated, extending over the length of two holds (the mid-hold and half of each adjacent hold) and transversely over the full breadth of the hull girder. A submodel of the area of interest was then extracted in order to calculate the hot spot stress of the connection and estimate the fatigue life of the structural detail. Two hot spot locations were identified: one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases be assessed for each loading condition, and that the dynamic load case causing the highest stress range at each loading condition be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on the ship hull response. Of main concern when assessing the fatigue strength of the lower hopper knuckle connection was the determination of the maximum, i.e. the critical, value of the stress range acting in a direction normal to the weld toe line; this acts in the transverse direction, perpendicular to the ship's centerline axis.
The load cases were explored both theoretically and numerically in order to establish the one causing the highest damage at the location examined. The most severe was found to be the load case induced by the beam sea condition with the encountered wave coming from starboard. The cargo hold model was assumed to be simply supported at its ends. A coarse mesh of quadrilateral shell elements, each with four integration points, was generated to represent the overall stiffness of the structure. A linear elastic analysis was performed, since linear elastic material behavior can be presumed when, as most design codes allow, only localized yielding occurs. At the submodel level, the displacements from the cargo hold analysis were applied to the outer-region nodes of the submodel as boundary conditions and loading. A very fine mesh zone was generated at the hot spot locations to calculate the hot spot stress. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture. Moreover, the loading conditions inducing the most damage at the location were found to be the various ballast conditions.
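Miner's rule combined with a two-parameter Weibull long-term stress-range distribution admits a simple closed-form damage estimate. The sketch below assumes a one-slope S-N curve N = K·S⁻ᵐ; all parameter values in the example are hypothetical, not those of the bulk carrier studied:

```python
import math

def miner_damage(n_cycles, q, h, K, m):
    """Cumulative fatigue damage ratio under Miner's rule for a
    two-parameter Weibull long-term stress-range distribution.

    n_cycles : number of stress cycles in the period considered
    q, h     : Weibull scale (MPa) and shape parameters
    K, m     : S-N curve parameters, N = K * S**(-m)

    Closed form: D = (n / K) * q**m * Gamma(1 + m/h);
    failure is predicted when D reaches 1.
    """
    return n_cycles / K * q**m * math.gamma(1.0 + m / h)

def fatigue_life_years(cycles_per_year, q, h, K, m):
    """Years until the cumulative damage ratio reaches 1."""
    return 1.0 / miner_damage(cycles_per_year, q, h, K, m)
```

With hypothetical values (5·10⁶ cycles/year, q = 20 MPa, h = 1, K = 10¹², m = 3) the yearly damage is 0.24, i.e. a life of about 4.2 years; the study's 16.4-year result corresponds to the actual hot spot stress ranges and S-N data.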

Keywords: dynamic load cases, finite element method, high cycle fatigue, lower hopper knuckle

Procedia PDF Downloads 400
4152 Industrial Prototype for Hydrogen Separation and Purification: Graphene Based-Materials Application

Authors: Juan Alfredo Guevara Carrio, Swamy Toolahalli Thipperudra, Riddhi Naik Dharmeshbhai, Sergio Graniero Echeverrigaray, Jose Vitorio Emiliano, Antonio Helio Castro

Abstract:

In order to advance the hydrogen economy, several industrial sectors can potentially benefit from the trillions in post-coronavirus stimulus spending. Blending hydrogen into natural gas pipeline networks has been proposed as a means of delivering it during the early market development phase, using separation and purification technologies downstream to extract the pure H₂ close to the point of end use. This first step has been cited around the world as an opportunity to use existing infrastructure for immediate decarbonisation pathways. Among current technologies used to extract hydrogen from mixtures in pipelines or liquid carriers, membrane separation can achieve the highest selectivity. The most efficient approach to the separation of H₂ from other substances by membranes comes from research on 2D layered materials, owing to their exceptional physical and chemical properties. Graphene-based membranes, with pore-size distributions in the nanometer and angstrom range, have shown fundamental and economic advantages over other materials. Their combination with the structure of ceramic and geopolymeric materials has enabled the synthesis of nanocomposites and the fabrication of membranes with long-term stability and robustness over a relevant range of physical and chemical conditions. Versatile separation modules have been developed for hydrogen separation, whose adaptability allows their integration in industrial prototypes for applications in heavy transport, steel, and cement production, as well as in small installations at end-user stations of pipeline networks. The developed membranes and prototypes are a practical contribution to the technological challenge of supplying pure H₂ to the mentioned industries as well as to hydrogen energy-based fuel cells.

Keywords: graphene nano-composite membranes, hydrogen separation and purification, separation modules, industrial prototype

Procedia PDF Downloads 143
4151 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

The electroencephalogram (EEG) is a record of the electrical activity of the brain with many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke or tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test in the investigation of epilepsy, and its visual analysis is very often applied for clinical confirmation of the diagnosis. This EEG analysis can also help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings, at least 24 hours long and acquired with a minimum of 24 electrodes, in which neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that an EEG screen usually displays 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies to facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification.
One of the differences among these methodologies is the type of input stimulus presented to the network, i.e., how the EEG signal is introduced into it. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e. parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented with each of them. The efficiency obtained with the raw signal varied between 43% and 84%. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of Wavelet Transform features varied between 57% and 81%, while the morphological descriptors presented efficiency values between 62% and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
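For instance, the FFT-spectrum input stimulus can be produced from a raw EEG epoch along the following lines; the band-averaging step and all parameter values are illustrative assumptions, not the exact preprocessing used in the study:

```python
import numpy as np

def fft_spectrum_features(epoch, fs=256, n_bands=8):
    """Turn a raw EEG epoch into an FFT magnitude-spectrum feature
    vector, one of the five input-stimulus types compared in the text.
    Band-averaged magnitudes keep the input dimension small enough
    for a neural network classifier."""
    spectrum = np.abs(np.fft.rfft(epoch))
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    edges = np.linspace(0.0, fs / 2.0, n_bands + 1)
    feats = [spectrum[(freqs >= lo) & (freqs < hi)].mean()
             for lo, hi in zip(edges[:-1], edges[1:])]
    return np.array(feats)
```

A pure 40 Hz test tone, for example, produces a feature vector whose peak lands in the 32-48 Hz band, which is the kind of frequency localization that the raw time-domain signal does not hand the network directly.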

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 507
4150 Teaching Translation in Brazilian Universities: A Study about the Possible Impacts of Translators’ Comments on the Cyberspace about Translator Education

Authors: Erica Lima

Abstract:

The objective of this paper is to discuss relevant points about teaching translation in Brazilian universities and the possible impacts of blogs and social networks on translator education today. It analyzes the curricula of Brazilian translation courses, contrasting them with information obtained from two social networking groups of great visibility in the area concerning the characteristics considered essential to becoming a successful professional. The research therefore takes as its main corpus a few undergraduate translation programs’ syllabuses, as well as postings in social network groups that specifically share professional opinions on whether a translator needs a degree in translation to practice the profession. To a certain extent, such comments and their corresponding responses propagate discourses which influence the ideas that aspiring translators and recent graduates end up having about themselves and their undergraduate courses. The postings also show that many professionals do not have a clear position regarding translator education: while dismissing it, they also encourage “free” courses. It is thus observed that cyberspace constitutes, on the one hand, a place where people mobilize in defense of similar ideas; on the other hand, it embodies a place of tension and conflict, given that there are many participants and, as in any other situation of interlocution, disagreements may arise. From the postings, aspects related to professionalism were analyzed (including discussions about regulation), as well as questions about the classic dichotomies: theory/practice; art/technique; self-education/academic training. As a partial result, a common interest in the valorization of the profession can be mentioned, although there is no consensus on the characteristics essential to being a good translator.
It was also possible to observe that the set of socially constructed representations in the groups reflects the situation of translation courses worldwide (especially in some European countries and in the United States), which does not accurately reflect the Brazilian idiosyncrasies of the area.

Keywords: cyberspace, teaching translation, translator education, university

Procedia PDF Downloads 366
4149 Control and Automation of Sensors in Metering System of Fluid

Authors: Abdelkader Harrouz, Omar Harrouz, Ali Benatiallah

Abstract:

This paper presents the essential definitions, roles and characteristics of the automation of a metering system. We discuss measurement, data acquisition and metrological control of a signal sensor from a dynamic metering system. We then present the control of the instruments of a fluid metering system, with a more detailed discussion of the reference standards.

Keywords: communication, metering, computer, sensor

Procedia PDF Downloads 535
4148 Enzyme Treatment of Sorghum Dough: Modifications of Rheological Properties and Product Characteristics

Authors: G. K. Sruthi, Sila Bhattacharya

Abstract:

Sorghum is an important food crop in the dry tropical areas of the world and possesses significant levels of phytochemicals and dietary fiber, offering health benefits. However, the absence of gluten is a limitation for converting sorghum dough into sheeted/flattened/rolled products. Chapathi/roti (flat unleavened bread conventionally prepared from whole wheat flour dough) was attempted from sorghum, as wheat gluten causes allergic reactions leading to celiac disease. The dynamic oscillatory rheology of sorghum flour dough (control sample) and of enzyme-treated sorghum doughs was studied and linked to the attributes of the finished ready-to-eat product. Treatment with amylase, xylanase, and a mix of amylase and xylanase drastically altered the rheological behaviour, lowering the dough consistency. For amylase-treated dough, a marked decrease of the storage modulus (G') from 85513 Pa to 23041 Pa and of the loss modulus (G") from 8304 Pa to 7370 Pa was noticed, while the phase angle (δ) increased from 5.6° to 10.1°. There was a two- and three-fold increase in the total sugar content after α-amylase and xylanase treatment, respectively, with simultaneous changes in the structure of the dough and the finished product. Scanning electron microscopy showed extensive changes in the starch granules. Amylase and mixed-enzyme treatment produced a sticky dough which was difficult to roll/flatten. Xylanase, by contrast, improved the dough handling properties and the quality attributes of the chapathi/roti. It is concluded that enzyme treatment can improve the rheological status of gluten-free doughs and products.
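The phase angle reported above is related to the two moduli by the standard rheological identity tan δ = G"/G' (a generic relation, not specific to this study); a larger δ means the dough behaves less like an elastic solid and more like a viscous liquid:

```python
import math

def phase_angle_deg(g_storage, g_loss):
    """Loss (phase) angle delta from dynamic oscillatory rheology:
    tan(delta) = G'' / G'."""
    return math.degrees(math.atan(g_loss / g_storage))
```

For the untreated dough, arctan(8304/85513) gives δ ≈ 5.6°, consistent with the reported value.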

Keywords: sorghum dough, amylase, xylanase, dynamic oscillatory rheology, sensory assessment

Procedia PDF Downloads 380
4147 Impact of PV Distributed Generation on Loop Distribution Network at Saudi Electricity Company Substation in Riyadh City

Authors: Mohammed Alruwaili‬

Abstract:

Nowadays, renewable energy resources are playing an important role in replacing traditional energy resources such as fossil fuels by integrating solar energy with conventional energy. Concerns about the environment have led to an intensive search for renewable energy sources. The rapid growth of distributed energy resources will prompt increasing interest in integrated distribution networks in the Kingdom of Saudi Arabia over the next few years, especially after the adoption of new laws and regulations in this regard. Photovoltaic energy is one of the promising renewable energy sources that has grown rapidly worldwide in the past few years and can be used to produce electrical energy through the photovoltaic process. The main objective of this research is to study the impact of PV on distribution networks based on real data. The work combines a site survey with computer simulation in the well-known software ETAP, modelling the electrical distribution lines together with variable inputs such as the solar radiation levels and field data representing the prevailing conditions in Diriyah, Riyadh region, Saudi Arabia. In addition, the impact of adding distributed generation units (DGs) to the distribution network, including solar photovoltaic (PV) units of different power capacities, is studied and assessed. The studied network contains 78 buses, and the power loss in the loop distribution network was reduced from the current condition by more than 69%. It is hoped that this research will enhance the efficiency, performance, quality and reliability of the distribution networks in Riyadh City through the improvements in power loss and voltage profile. Simulation results show that the applied method can illustrate the positive impact of PV on loop distribution networks.
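The mechanism behind the loss reduction can be illustrated with a toy single-feeder I²R calculation (a deliberately simplified model, not the full 78-bus network simulation): local PV generation offsets the power imported over the line, and because losses scale with the square of the current, halving the import cuts the line loss by 75%.

```python
def line_loss_kw(load_kw, pv_kw, v_kv, r_ohm):
    """I^2 R loss on a feeder serving a load partly supplied by
    local PV.  Toy single-line model: I = P_net / V."""
    p_net = max(load_kw - pv_kw, 0.0)   # PV offsets the imported power
    i_amp = p_net / v_kv                # kW / kV -> A
    return i_amp**2 * r_ohm / 1000.0    # W -> kW
```

The voltage level, line resistance and load values here are hypothetical; in a real loop network the loss change also depends on the PV siting and the power-flow redistribution, which is what the full simulation captures.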

Keywords: renewable energy, smart grid, efficiency, distribution network

Procedia PDF Downloads 119
4146 The Right of Taiwanese Individuals with Mental Illnesses to Participate in Medical Decision-Making

Authors: Ying-Lun Tseng Chiu-Ying Chen

Abstract:

Taiwan's Mental Health Act was amended at the end of 2022, adding regulations on the refusal of compulsory treatment by patients with mental illnesses. In addition to an examination committee, a judge must now assess the patient's need for compulsory treatment, and the maximum duration of compulsory hospitalization has been reduced from an unlimited period to 60 days. The amendments aim to promote the healthcare autonomy of individuals with mental illnesses in Taiwan and to prevent their voices from being silenced in medical decision-making while they still possess rationality. They also plan to use community support and social care networks to replace the current practice of compulsory treatment in Taiwan. This study uses a qualitative research methodology, with interview guidelines, to inquire about the experiences of Taiwanese who have undergone compulsory hospitalization, compulsory community treatment, and compulsory medical care. The interviews explored how participants felt when subjected to compulsory medical intervention, their insight into their illness, their opinions after treatment, and whether alternative medical interventions they proposed were considered. Participants were also asked about their personal life history and their support networks. We recruited 12 Taiwanese who had experienced compulsory medical intervention and conducted 14 interviews. The findings indicated that participants still possessed rationality during the onset of their illness; however, the alternative treatments they proposed in place of compulsory care sometimes diverged from the views of their doctors and families, and doctors tended to favor their own professional judgment and the families' preferences. The decision-making power of Taiwanese mental health patients therefore still needs to improve. 
Because this is qualitative research, participants were difficult to recruit, and the small sample relative to Taiwan's population may introduce bias into the analysis. Taiwan thus still has significant progress to make in enhancing the decision-making rights of people such as the participants in this study.

Keywords: medical decision making, compulsory treatment, medical ethics, mental health act

Procedia PDF Downloads 56
4145 Investigation of the Kutta Condition Using Unsteady Flow

Authors: K. Bhojnadh, M. Fiddler, D. Cheshire

Abstract:

An investigation into the Kutta condition at the trailing edge of a subsonic aerofoil was conducted, leading to an Ansys Fluent analysis of flow separation over a NACA 0012 aerofoil. The aerofoil was subjected to oscillations to create an unsteady flow over it, thereby generating turbulence, with unsteady aerodynamics playing a key role in determining the flow regimes when the aerofoil is subjected to different angles of attack and varying Reynolds numbers. Many theories have evolved to determine the flow parameters of a 2-D aerofoil in these unsteady conditions, because aerofoils behave unpredictably at the trailing edge when subjected to different angles of attack. The shear layer observed at the trailing edge tends towards an unsteady turbulent flow even at small angles of attack, creating drag as the flow separates and reducing the aerodynamic performance of the aerofoil. In this paper, research was conducted to determine the effect of the Kutta circulation over the aerofoil and its role in reducing the pressure and boundary layer distribution effects over the aerofoil. The effect of circulation is observed in Ansys Fluent using varying flow parameters and differencing schemes to study the flow behaviour on the aerofoil. Initially, a steady flow analysis was conducted to determine the effect of circulation, and it was noticed that this effect could only be properly observed when the aerofoil is subjected to oscillations. This was therefore modelled using Ansys user-defined functions, which define the motion of the aerofoil by creating a dynamic mesh around it. Initial results have been obtained, and further development of the dynamic mesh functions in Ansys is taking place. 
This research will determine the basic principles of unsteady flow aerodynamics applied to the investigation of Kutta-related circulation, and give an indication regarding the generation of vortices, which is discussed further in this paper.
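As background to the circulation discussion, the steady thin-aerofoil result fixed by the Kutta condition, Γ = π·c·V·α, combined with the Kutta–Joukowski theorem L' = ρ·V·Γ, reproduces the familiar lift slope C_l = 2πα (a standard textbook relation, not a result of this unsteady study):

```python
import math

def kutta_joukowski_lift(rho, v_inf, chord, alpha_rad):
    """Thin-aerofoil estimate of the circulation fixed by the Kutta
    condition, Gamma = pi * c * V * alpha, and the resulting lift
    per unit span via Kutta-Joukowski, L' = rho * V * Gamma."""
    gamma = math.pi * chord * v_inf * alpha_rad
    return rho * v_inf * gamma
```

It is precisely this steady picture that breaks down for the oscillating aerofoil, where the trailing-edge condition and the shed vorticity become time-dependent.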

Keywords: circulation, flow separation, turbulence modelling, vortices

Procedia PDF Downloads 185
4144 Optimal Trajectory Finding of IDP Ventilation Control with Outdoor Air Information and Indoor Health Risk Index

Authors: Minjeong Kim, Seungchul Lee, Iman Janghorban Esfahani, Jeong Tai Kim, ChangKyoo Yoo

Abstract:

The trajectory of the set-points of a ventilation control system plays an important role in the efficient ventilation of subway stations, since it affects both the level of indoor air pollutants and the ventilation energy consumption. To maintain indoor air quality (IAQ) within a comfortable range at lower ventilation energy consumption, the optimal trajectory of the ventilation control system needs to be determined. The concentration of air pollutants inside the station shows a diurnal variation in accordance with the variations in passenger numbers and subway frequency. To consider this diurnal variation of IAQ, an iterative dynamic programming (IDP) method is used, which searches for a piecewise control policy by separating the whole duration into several stages. When outdoor air is contaminated by pollutants, it enters the subway station through the ventilation system, resulting in deteriorated IAQ and adverse effects on passenger health. In this study, to consider the influence of outdoor air quality (OAQ), a new performance index for the IDP incorporating passenger health risk and OAQ is proposed. The study was carried out for an underground station of the Seoul Metro, Korea. The optimal set-points of the ventilation control system are determined every 3 hours; the ventilation controller then adjusts the ventilation fan speed according to the optimal set-point changes. Compared with the manual ventilation system, which is operated irrespective of the OAQ, the IDP-based ventilation control system saves 3.7% of the energy consumption. Compared with the fixed set-point controller, which is operated irrespective of the diurnal IAQ variation, the IDP-based controller shows better performance, with a 2% decrease in energy consumption while maintaining a comfortable IAQ range inside the station.
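The stage-wise structure of the set-point search can be sketched as follows. This is a deliberately simplified stand-in for IDP: because the toy stages are independent, a per-stage search already yields the optimal piecewise policy, whereas real IDP iteratively refines a coupled multi-stage trajectory. The fan speeds, IAQ limit and mixing model are illustrative assumptions:

```python
def optimal_setpoints(pollutant_load, limit=150.0, speeds=(0.4, 0.7, 1.0)):
    """Stage-wise set-point search: the day is split into stages
    (e.g. 3-hour blocks) and each stage picks the cheapest fan speed
    that keeps the predicted IAQ under the limit.  pollutant_load
    gives the per-stage pollutant source term."""
    plan = []
    for load in pollutant_load:
        # predicted indoor concentration ~ source term / removal rate
        feasible = [s for s in speeds if load / s <= limit]
        plan.append(min(feasible) if feasible else max(speeds))
    return plan
```

The diurnal variation enters through `pollutant_load`: rush-hour stages demand the highest fan speed, while off-peak stages get by with the lowest, which is exactly the energy-saving behaviour the piecewise policy exploits.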

Keywords: indoor air quality, iterative dynamic algorithm, outdoor air information, ventilation control system

Procedia PDF Downloads 487
4143 Assessment of Image Databases Used for Human Skin Detection Methods

Authors: Saleh Alshehri

Abstract:

Human skin detection is a vital step in many applications, some of which are critical, especially those related to security. This underscores the importance of a high-performance detection algorithm. To validate the accuracy of such an algorithm, image databases are usually used. However, the suitability of these image databases is still questionable. It is suggested here that suitability can be measured mainly by how much of the color space the database spans. This research investigates the validity of three well-known image databases.
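The proposed span-of-color-space criterion can be made concrete by quantizing RGB space into cells and counting the fraction that a database's pixels occupy; the bin count and the measure itself are illustrative assumptions, not the exact metric of the study:

```python
import numpy as np

def colour_space_coverage(pixels, bins=8):
    """Fraction of the quantised RGB colour space occupied by a
    database's pixels -- one way to make the 'span of the colour
    space' suitability criterion concrete.  pixels is an (n, 3)
    array of 8-bit RGB values."""
    q = np.clip((np.asarray(pixels) * bins / 256).astype(int), 0, bins - 1)
    occupied = {tuple(p) for p in q}
    return len(occupied) / bins**3
```

A database of nearly identical skin tones scores close to 1/512 with 8 bins per channel, while a varied one approaches 1, so the score directly penalises the single-skin-tone bias that the criterion is meant to expose.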

Keywords: image databases, image processing, pattern recognition, neural networks

Procedia PDF Downloads 244
4142 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions (major disasters, policy changes, exogenous shocks, etc.) on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as introducing recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted.
Efficient communication among MPI processes is achieved by combining MPI derived datatypes with newer MPI features. Most communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy; as an example, a single time step of a 1:1-scale model of Austria (about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (about 322 million agents).
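The load-balancing idea behind the partitioning (keep each employer and its employees on one rank so employer-employee edges never cross process boundaries) can be sketched without MPI at all. The greedy bin-packing below is a toy stand-in for the authors' actual partitioner, and all names are illustrative.

```python
# Toy sketch of agent partitioning for a DMP implementation: whole firms
# (employer plus employees) are assigned to the currently lightest "rank",
# approximating a balanced distribution with no cut employer-employee edges.

from collections import defaultdict

def partition_by_employer(employer_of, n_ranks):
    """employer_of: worker -> employer. Returns (agent -> rank, per-rank load)."""
    firms = defaultdict(list)
    for agent, employer in employer_of.items():
        firms[employer].append(agent)
    load = [0] * n_ranks
    rank_of = {}
    # largest firms first so the greedy bin-packing stays roughly balanced
    for employer, members in sorted(firms.items(), key=lambda kv: -len(kv[1])):
        r = load.index(min(load))
        for agent in members + [employer]:
            rank_of[agent] = r
        load[r] += len(members) + 1
    return rank_of, load

employer_of = {f"w{i}": f"firm{i % 3}" for i in range(12)}  # 12 workers, 3 firms
rank_of, load = partition_by_employer(employer_of, n_ranks=2)
print(load)
```

In the real implementation this partition would feed `MPI_Comm` rank assignments, with the remaining dense graphs (e.g., consumption markets) handled via the proxy agents the abstract describes.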

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 111
4141 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the selection of the most important risk factors. Subsequently, risk profiles employ a risk-factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
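The naive Bayes recommendation step can be sketched with a minimal categorical classifier. The risk factors, group labels, and training rows below are hypothetical; the paper's classifier is trained on survey-derived expert knowledge.

```python
# Minimal categorical naive Bayes sketch of the profile-group recommendation.
# Factors, groups, and training data are hypothetical illustrations.

from collections import Counter, defaultdict
import math

def train_nb(rows, labels):
    """rows: list of dicts of categorical risk factors; labels: group per row."""
    prior = Counter(labels)
    cond = defaultdict(Counter)  # (label, factor) -> value counts
    for row, y in zip(rows, labels):
        for f, v in row.items():
            cond[(y, f)][v] += 1
    return prior, cond, len(labels)

def predict_group(model, row):
    prior, cond, n = model
    def log_post(y):
        lp = math.log(prior[y] / n)
        for f, v in row.items():
            c = cond[(y, f)]
            lp += math.log((c[v] + 1) / (sum(c.values()) + 2))  # Laplace smoothing
        return lp
    return max(prior, key=log_post)

rows = [{"software": "obsolete", "support": "none"},
        {"software": "current", "support": "vendor"},
        {"software": "obsolete", "support": "community"},
        {"software": "current", "support": "vendor"}]
labels = ["endangered", "safe", "endangered", "safe"]
model = train_nb(rows, labels)
print(predict_group(model, {"software": "obsolete", "support": "none"}))
```

In the paper's setting, the dict of categorical factors corresponds to one institution's multidimensional risk-profile vector, and the predicted label is the recommended endangerment group shown to the expert.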

Keywords: linked open data, information integration, digital libraries, data mining

Procedia PDF Downloads 403
4140 Dynamic Thin Film Morphology near the Contact Line of a Condensing Droplet: Nanoscale Resolution

Authors: Abbasali Abouei Mehrizi, Hao Wang

Abstract:

The thin-film region is important in the heat transfer process due to its low thermal resistance. At the same time, the dynamic contact angle is a crucial boundary condition in numerical simulations. While different models make different assumptions about the microscopic contact angle, none of these assumptions is supported by experimental evidence, and the contact-line movement mechanism remains vague. Experimental investigation of complete wetting is more common than of partial wetting, especially at nanoscale resolution, where the thin-film profile varies sharply in partial wetting. In the present study, an experimental investigation of the water film morphology near the triple-phase contact line during condensation is performed. State-of-the-art tapping-mode atomic force microscopy (TM-AFM) was used to obtain high-resolution film profiles down to 2 nm from the contact line. The droplet was placed in a saturated chamber, with a pristine silicon wafer as a smooth substrate. The substrate was heated by a PI film heater, so that the chamber became oversaturated by droplet evaporation. When the heater was turned off, water vapor gradually started condensing on the droplet and the droplet advanced, at a speed below 20 nm/s. The dominant result is that, in contrast to nonvolatile liquids, the film profile descends straight to the surface down to 2 nm from the substrate; however, slight bending was occasionally observed below 20 nm. It can therefore be claimed that, at low condensation rates, the microscopic contact angle equals the optically detectable macroscopic contact angle. This result can be used to simplify heat transfer modeling in partial wetting, and the experimentally established equality of the microscopic and macroscopic contact angles provides solid evidence for using this boundary condition in numerical simulation.

Keywords: advancing, condensation, microscopic contact angle, partial wetting

Procedia PDF Downloads 277
4139 Analyzing Impacts of Road Network on Vegetation Using Geographic Information System and Remote Sensing Techniques

Authors: Elizabeth Malebogo Mosepele

Abstract:

Road transport has become increasingly common in the world; people rely on road networks for transportation on a daily basis. However, the environmental impact of roads extends into the surrounding landscapes, well beyond the road surface itself. This study investigates the impact of the road network on natural vegetation. It provides baseline knowledge regarding roadside vegetation and will be helpful for the future conservation of biodiversity along road verges and for their improvement. The general hypothesis of this study is that the amount and condition of roadside vegetation can be explained by road network conditions. Remote sensing techniques were used to analyze vegetation condition. A Landsat 8 OLI image was used to assess vegetation cover; an NDVI image was generated and used as a base from which land cover classes were extracted, comprising four categories: healthy vegetation, degraded vegetation, bare surface, and water. The image was classified using the supervised classification technique, and road networks were digitized from Google Earth. For observed data, transect-based quadrats of 50 × 50 m were surveyed next to road segments for vegetation assessment. Vegetation condition was related to the road network, with a multinomial logistic regression confirming a significant relationship between vegetation condition and the road network. The null hypothesis was that 'there is no variation in vegetation condition as we move away from the road.' Analysis of vegetation condition revealed degraded vegetation in close proximity to road segments and healthy vegetation as the distance from the road increases. The chi-squared value was compared with the critical value of 3.84 at the 0.05 significance level to determine the significance of the relationship.
Given that the chi-squared value was 395.5004, the null hypothesis was rejected: there is significant variation in vegetation condition as the distance from the road increases. The conclusion is that the road network plays an important role in the condition of vegetation.
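The two computational steps above, the NDVI index from the red and near-infrared bands and the chi-squared comparison against the 3.84 critical value, can be sketched directly. The contingency counts below are hypothetical, standing in for the paper's quadrat survey.

```python
# Sketch of NDVI and a Pearson chi-squared test of vegetation condition
# versus distance from the road. Contingency counts are hypothetical.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    return sum((table[i][j] - row[i] * col[j] / total) ** 2
               / (row[i] * col[j] / total)
               for i in range(len(row)) for j in range(len(col)))

print(round(ndvi(0.45, 0.12), 3))        # healthy vegetation gives high NDVI
# rows: near road / far from road; columns: degraded / healthy quadrats
stat = chi_squared([[40, 10], [12, 38]])
print(stat > 3.84)                       # compare with 0.05 critical value, 1 df
```

With the paper's reported statistic of 395.5004 against the same 3.84 threshold, the null hypothesis of no variation with distance is clearly rejected.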

Keywords: Chi squared, geographic information system, multinomial logistic regression, remote sensing, road side vegetation

Procedia PDF Downloads 412
4138 American Sign Language Recognition System

Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba

Abstract:

The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks: VGG16 for precise spatial feature extraction and a vision transformer for contextual understanding of the sign language gestures. The system processes live input, extracting critical features through these sophisticated neural network models, and combines them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system's ability to operate in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.
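The dual-branch fusion described above can be sketched schematically: a CNN-style branch supplies a spatial feature summary, a ViT-style branch supplies a pooled patch-token summary, and the two are concatenated before a classifier head. Everything here (the toy "branches", shapes, and random weights) is a hypothetical illustration of the fusion pattern, not the authors' trained VGG16/ViT models.

```python
# Schematic sketch of CNN + ViT feature fusion before classification.
# Toy branches and random weights; illustrates the architecture only.

import numpy as np

rng = np.random.default_rng(0)

def cnn_branch(image):
    """Stand-in for VGG16 feature extraction: a spatial channel summary."""
    return image.mean(axis=(0, 1))                       # (channels,)

def vit_branch(image):
    """Stand-in for ViT: split into patches, embed, pool the tokens."""
    h = image.shape[0] // 2
    patches = [image[:h, :h], image[:h, h:], image[h:, :h], image[h:, h:]]
    tokens = np.stack([p.reshape(-1, image.shape[2]).mean(0) for p in patches])
    return tokens.mean(axis=0)                           # pooled token

def fused_logits(image, w):
    feat = np.concatenate([cnn_branch(image), vit_branch(image)])
    return feat @ w                                      # linear classifier head

image = rng.random((8, 8, 3))                            # toy RGB frame
w = rng.random((6, 26))                                  # 26 ASL letter classes
probs = np.exp(fused_logits(image, w))
probs /= probs.sum()
print(probs.shape)
```

The real system would replace both branches with pretrained networks and train the fused head end-to-end; only the concatenate-then-classify structure carries over from this sketch.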

Keywords: sign language, computer vision, vision transformer, VGG16, CNN

Procedia PDF Downloads 17
4137 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient, renewable, and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load distributed from the heat generation plant to the connected buildings through the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks, specifically recurrent networks with long short-term memory (LSTM) cells, which are able to capture complex non-linear relations among temporal variables. First, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models.
The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
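The recurrence at the heart of the forecasting models can be made concrete with a single LSTM cell step: gates decide how much past temperature/demand information to keep in the cell state. This is a didactic NumPy sketch with random parameters, not the paper's trained model.

```python
# One LSTM cell step in plain NumPy, to illustrate the gating mechanism
# that lets the forecaster retain long-range temporal dependencies.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """x: input (e.g. [temperature, past demand]); h, c: hidden/cell state;
    W, U, b: stacked parameters for the four gates."""
    z = W @ x + U @ h + b                     # (4*n,) gate pre-activations
    n = h.size
    i, f, o = (sigmoid(z[k * n:(k + 1) * n]) for k in range(3))  # in/forget/out
    g = np.tanh(z[3 * n:])                    # candidate cell update
    c_new = f * c + i * g                     # forget old memory, write new
    h_new = o * np.tanh(c_new)                # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(1)
n, d = 8, 2                                   # hidden size, input size
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h = c = np.zeros(n)
for x in [np.array([5.2, 310.0]), np.array([4.8, 305.0])]:   # (temp, demand)
    h, c = lstm_step(x / np.array([10.0, 400.0]), h, c, W, U, b)  # crude scaling
print(h.shape)
```

In practice a framework-provided LSTM layer would be unrolled over a window of past demand and predicted-temperature values, exactly as the two-stage pipeline above describes.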

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 123
4136 Resale Housing Development Board Price Prediction Considering Covid-19 through Sentiment Analysis

Authors: Srinaath Anbu Durai, Wang Zhaoxia

Abstract:

Twitter sentiment has been used as a predictor of price values and trends in both the stock market and the housing market. The pioneering works in this stream of research drew upon behavioural economics to show that sentiment or emotions impact economic decisions. The latest works in this stream focus on the algorithm used, as opposed to the data used. A literature review of works in this stream, through the lens of the data used, shows a paucity of work considering the impact of sentiments caused by an external factor on either the stock or the housing market, despite an abundance of works in behavioural economics showing that such sentiments and emotions impact economic decisions. To address this gap, this research studies the impact of Twitter sentiment pertaining to the Covid-19 pandemic on resale Housing Development Board (HDB) apartment prices in Singapore. It leverages SNSCRAPE to collect tweets pertaining to Covid-19; the lexicon-based tools VADER and TextBlob are used for sentiment analysis; Granger causality is used to examine the relationship between Covid-19 cases and the sentiment score; and neural networks are leveraged as prediction models. Twitter sentiment pertaining to Covid-19 as a predictor of HDB prices in Singapore is studied in comparison with the traditional predictors of housing prices, i.e., the structural and neighbourhood characteristics. The results indicate that using Twitter sentiment pertaining to Covid-19 leads to better prediction than using only the traditional predictors, and that it performs better as a predictor than two of the traditional predictors. Hence, Twitter sentiment pertaining to an external factor should be considered as important as traditional predictors. This paper demonstrates the real-world economic applications of sentiment analysis of Twitter data.
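The Granger-causality step can be sketched as a lag-regression F-test: does adding lagged case counts improve prediction of the sentiment score over its own lags? The synthetic series and lag order below are illustrative; the paper uses VADER/TextBlob sentiment scores and real case data.

```python
# Minimal Granger-causality F-test: restricted AR(p) on y versus a model
# that also includes p lags of x. Series here are synthetic.

import numpy as np

def granger_f(y, x, p=2):
    """F-statistic for 'x Granger-causes y' at lag order p."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, lags_y])               # restricted design
    Xu = np.hstack([ones, lags_y, lags_x])       # unrestricted design
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = n - p - Xu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df)

rng = np.random.default_rng(7)
cases = rng.normal(size=300).cumsum()            # synthetic case-count series
sent = np.empty(300)
sent[:2] = 0
for t in range(2, 300):                          # sentiment driven by past cases
    sent[t] = 0.5 * sent[t - 1] - 0.3 * cases[t - 1] + rng.normal(scale=0.5)
print(granger_f(sent, cases) > 3.0)              # large F: cases help predict
```

A large F relative to the F(p, df) critical value supports the kind of case-count-to-sentiment relationship the abstract examines before feeding sentiment into the price models.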

Keywords: sentiment analysis, Covid-19, housing price prediction, tweets, social media, Singapore HDB, behavioral economics, neural networks

Procedia PDF Downloads 90
4135 Efficient Backup Protection for Hybrid WDM/TDM GPON System

Authors: Elmahdi Mohammadine, Ahouzi Esmail, Najid Abdellah

Abstract:

This contribution aims to present a new protected hybrid WDM/TDM PON architecture using Wavelength Selective Switches and Optical Line Protection devices. The objective from using these technologies is to improve flexibility and enhance the protection of GPON networks.

Keywords: Wavlenght Division Multiplexed Passive Optical Network (WDM-PON), Time Division Multiplexed PON (TDM-PON), architecture, Protection, Wavelength Selective Switches (WSS), Optical Line Protection (OLP)

Procedia PDF Downloads 519
4134 Protection Plan of Medium Voltage Distribution Network in Tunisia

Authors: S. Chebbi, A. Meddeb

Abstract:

The distribution networks are often exposed to harmful incidents which can halt the electricity supply of the customer. In this context, we studied a real case of a critical zone of the Tunisian network which is currently characterized by the dysfunction of its plan of protection. In this paper, we were interested in the harmonization of the protection plan settings in order to ensure a perfect selectivity and a better continuity of service on the whole of the network.

Keywords: distribution network Gabes-Tunisia, continuity of service, protection plan settings, selectivity

Procedia PDF Downloads 496
4133 Alternative Method of Determining Seismic Loads on Buildings Without Response Spectrum Application

Authors: Razmik Atabekyan, V. Atabekyan

Abstract:

This article discusses a new alternative method for determination of seismic loads on buildings, based on resistance of structures to deformations of vibrations. The basic principles for determining seismic loads by spectral method were developed in 40… 50ies of the last century and further have been improved to pursuit true assessments of seismic effects. The base of the existing methods to determine seismic loads is response spectrum or dynamicity coefficient β (norms of RF), which are not definitively established. To this day there is no single, universal method for the determination of seismic loads and when trying to apply the norms of different countries, significant discrepancies between the results are obtained. On the other hand there is a contradiction of the results of macro seismic surveys of strong earthquakes with the principle of the calculation based on accelerations. It is well-known, on soft soils there is an increase of destructions (mainly due to large displacements), even though the accelerations decreases. Obviously, the seismic impacts are transmitted to the building through foundation, but paradoxically, the existing methods do not even include foundation data. Meanwhile acceleration of foundation of the building can differ several times from the acceleration of the ground. During earthquakes each building has its own peculiarities of behavior, depending on the interaction between the soil and the foundations, their dynamic characteristics and many other factors. In this paper we consider a new, alternative method of determining the seismic loads on buildings, without the use of response spectrum. The following main conclusions: 1) Seismic loads are revealed at the foundation level, which leads to redistribution and reduction of seismic loads on structures. 2) The proposed method is universal and allows determine the seismic loads without the use of response spectrum and any implicit coefficients. 
3) The possibility of taking into account important factors such as the strength characteristics of the soils, the size of the foundation, the angle of incidence of the seismic ray and others. 4) Existing methods can adequately determine the seismic loads on buildings only for first form of vibrations, at an average soil conditions.

Keywords: seismic loads, response spectrum, dynamic characteristics of buildings, momentum

Procedia PDF Downloads 482
4132 Older Consumer’s Willingness to Trust Social Media Advertising: An Australian Case

Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant

Abstract:

Social media networks have become the hotbed for advertising activities, due firstly to their increasing consumer/user base and secondly to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world's population now uses social media (4.8 billion users, or 60%), with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks by users grows, key business strategies used for interacting with these potential customers have matured, especially social media advertising. Unlike traditional media outlets, social media advertising is highly interactive and digital channel-specific. Social media advertisements are clearly targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these social media networks. The purpose of this exploratory paper is to investigate the extent to which social media users trust social media advertising. Understanding this relationship will fundamentally assist marketers in better understanding social media interactions and their implications for society. Using a web-based quantitative survey instrument, survey participants were recruited via a reputable online panel survey site. Respondents to the survey represented social media users from all states and territories within Australia. Completed responses were received from a total of 258 social media users. Survey respondents represented all core age demographic groupings, including Gen Z/Millennials (18-45 years = 60.5% of respondents) and Gen X/Boomers (46-66+ years = 39.5% of respondents).
An adapted ADTRUST scale, using a 20-item 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within different traditional media, such as broadcast and print media, and more recently the Internet (as a broader platform). The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution. These three factors were named 'reliability', 'usefulness and affect', and 'willingness to rely on'. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each of the factors had they been measured directly, with the following results recorded (Reliability = 4.68/7; Usefulness and Affect = 4.53/7; and Willingness to Rely On = 3.94/7). Further statistical analysis (independent samples t-test) determined the difference in factor scores between the factors when age (Gen Z/Millennials vs. Gen X/Boomers) was utilised as the independent, categorical variable. The results showed the difference in mean scores across all three factors to be statistically significant (p<0.05) for these two core age groupings: Gen Z/Millennials Reliability = 4.90/7 vs. Gen X/Boomers Reliability = 4.34/7; Gen Z/Millennials Usefulness and Affect = 4.85/7 vs. Gen X/Boomers Usefulness and Affect = 4.05/7; and Gen Z/Millennials Willingness to Rely On = 4.53/7 vs. Gen X/Boomers Willingness to Rely On = 3.03/7. The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads, when compared to younger, more social media-savvy consumers. This is especially evident with respect to Factor 3 (Willingness to Rely On), whose underlying variables reflect one's behavioural intent to act based on the information conveyed in advertising.
These findings can be useful to marketers, advertisers, and brand managers in that the results highlight a critical need to design ‘authentic’ advertisements on social media sites to better connect with these older users, in an attempt to foster positive behavioural responses from within this large demographic group – whose engagement with social media sites continues to increase year on year.
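The independent-samples t-test used to compare factor scores between the two age groupings has a simple closed form. The group data below are synthetic placeholders (the paper reports, e.g., Willingness to Rely On means of 4.53 vs. 3.03 on a 7-point scale); only the statistic itself is being illustrated.

```python
# Student's t for two independent samples with pooled variance, as used to
# compare factor scores between age groupings. Sample data are synthetic.

import math

def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

gen_z = [4.9, 4.3, 4.6, 4.8, 4.2, 4.5, 4.7, 4.4]    # synthetic 7-point scores
boomers = [3.2, 2.8, 3.1, 2.9, 3.3, 3.0, 2.7, 3.2]
t = t_statistic(gen_z, boomers)
print(round(t, 2))
```

A |t| above the two-tailed critical value for the combined degrees of freedom (about 2.145 here, at p < 0.05 with df = 14) corresponds to the significant group differences the study reports.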

Keywords: social media advertising, trust, older consumers, online

Procedia PDF Downloads 68
4131 Rheological Study of Natural Sediments: Application in Filling of Estuaries

Authors: S. Serhal, Y. Melinge, D. Rangeard, F. Hage Chehadeh

Abstract:

The filling of estuaries is an international problem that can cause economic and environmental damage. This work aims to study the rheological structuring mechanisms of natural sedimentary liquid-solid mixtures in estuaries in order to better understand their filling. The estuary of the Rance river, located in Brittany, France, is particularly targeted by the study. The aim is to provide answers about the rheological behavior of natural sediments by detecting the structural factors influencing the rheological parameters, so that we can better understand the filling of estuarine areas and, especially, consider sustainable 'cleansing' solutions for these areas. The sediments were collected from the trap of Lyvet in the Rance estuary. This trap was created by the association COEUR (Comité Opérationnel des Elus et Usagers de la Rance) in 1996 in order to facilitate the cleansing of the estuary: it creates a privileged area for the deposition of sediments and consequently makes the cleansing of the estuary easier. We began our work with a preliminary study to establish the trend of the rheological behavior of the suspensions and to specify the dormant phase which precedes the onset of the biochemical reactivity of the suspensions. We then highlight the visco-plastic character at early age using a Kinexus rheometer with plate-plate geometry. This rheological behavior of the suspensions is represented by the Bingham model, whose dynamic yield stress and viscosity can be functions of the solid volume fraction, granular extent, and chemical reactivity. The evolution of the viscosity as a function of the solid volume fraction is modeled by the Krieger-Dougherty model. On the other hand, the analysis of the dynamic yield stress showed a clear functional link with the solid volume fraction.
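The two constitutive relations named above can be written out as a small numeric sketch: the Bingham model, τ = τ₀ + μₚ·γ̇, and the Krieger-Dougherty relation, η = η₀·(1 − φ/φₘ)^(−[η]·φₘ). The parameter values below are illustrative placeholders, not fitted values for the Rance sediments.

```python
# Bingham and Krieger-Dougherty relations with illustrative parameters.

def bingham_stress(gamma_dot, tau0=5.0, mu_p=0.8):
    """Bingham model: shear stress = yield stress + plastic viscosity * rate."""
    return tau0 + mu_p * gamma_dot

def krieger_dougherty(phi, eta0=1e-3, phi_m=0.6, intrinsic=2.5):
    """Viscosity vs. solid volume fraction phi; diverges as phi -> phi_m."""
    return eta0 * (1.0 - phi / phi_m) ** (-intrinsic * phi_m)

print(bingham_stress(10.0))                              # stress at rate 10 1/s
print(krieger_dougherty(0.3) > krieger_dougherty(0.1))   # viscosity grows with phi
```

Fitting τ₀ and the viscosity-φ curve to the flow curves measured on the Kinexus rheometer is what connects these expressions to the study's analysis.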

Keywords: estuaries, rheological behavior, sediments, Kinexus rheometer, Bingham model, viscosity, yield stress

Procedia PDF Downloads 140
4130 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education

Authors: Gleyce Assis Da Silva Barbosa

Abstract:

In recent years, we have witnessed vertiginous growth of large education companies. Backed by national and global capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks formed by mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium and large schools and universities, teaching systems, and other products and services. They are also able to weave their webs, directly or indirectly, through philanthropic circles, limited partnerships, family businesses, and even public education, via various mechanisms of outsourcing, privatization, and commercialization of products for the sector. Although the growth of these groups in basic education seems a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to higher education conglomerates and to other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in shaping guidelines that boosted the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer clients. Financial power combined with the neoliberalization of state public policies has allowed the profusion of social exclusion, the increase in individuals without access to basic services, deindustrialization, automation, capital volatility, and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generates various impacts, such as the precarization of work. Understanding the connection between these processes, which drive the economy, allows us to see their consequences in labor relations and in the territory.
In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the directions of education on the national and even international scene, since this process is linked to the multiple scales of financial globalization. Therefore, the present research has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. The methodology combined a survey of laws, data, and public policies on the subject; data from these companies available on investor-relations websites; a survey of information on global and national companies operating in Brazilian basic education; and a mapping of the expansion of educational oligopolies using public data on school locations. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can bring to teaching work.

Keywords: financialization, oligopolies, education, Brazil

Procedia PDF Downloads 43
4129 Finite Element Modeling and Nonlinear Analysis for Seismic Assessment of Off-Diagonal Steel Braced RC Frame

Authors: Keyvan Ramin

Abstract:

The geometric nonlinearity of the Off-Diagonal Bracing System (ODBS) can complement and extend the material nonlinearity of reinforced concrete. In the initial phase, finite element models are built for the flexural frame, the X-braced frame, and the ODBS-braced frame; the different models are then investigated through various analyses. The models are verified against experimental results for the flexural and X-braced frames. Analytical assessments are performed based on three-dimensional finite element modeling. Nonlinear static analysis is used to obtain the performance level and seismic behavior, and the response modification factors are then calculated from each model's pushover curve. In the next phase, the cracks observed in the finite element models, especially in the RC members of all three systems, are evaluated; the finite element assessment of the cracks generated in the ODBS-braced frame is performed for various time steps. Nonlinear dynamic time-history analysis is carried out on models with different numbers of stories for three records: the El Centro, Naghan, and Tabas earthquake accelerograms. Dynamic analysis is performed, after scaling the accelerograms, on the flexural frame, the X-braced frame, and the ODBS-braced frame in turn. A base point on the RC frame is used to investigate the proportional displacement under each record. Hysteresis curves are also assessed in the course of this study, and the equivalent viscous damping for the ODBS system is estimated according to the literature. The results in each section show that the ODBS system has acceptable seismic behavior, and the conclusions converge when the ODBS system is utilized in a reinforced concrete frame.

Keywords: FEM, seismic behaviour, pushover analysis, geometric nonlinearity, time history analysis, equivalent viscous damping, passive control, crack investigation, hysteresis curve

Procedia PDF Downloads 364
4128 Research and Implementation of Cross-domain Data Sharing System in Net-centric Environment

Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan

Abstract:

With the rapid development of network and communication technology, a great deal of data has been generated in the different domains of a network. These data show a trend toward larger scale and more complex structure, so an effective and flexible cross-domain data-sharing system is needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three sub-systems. The data distribution sub-system provides data exchange services through publish-subscribe technology that supports asynchronous, many-to-many communication, matching the needs of a dynamic, large-scale distributed computing environment. The access control sub-system adopts Attribute-Based Access Control (ABAC) technology to uniformly model attributes of the subject, object, permission, and environment; it effectively monitors users' access to resources and ensures that legitimate users obtain effective access rights within a valid period. The cross-domain access security negotiation sub-system automatically determines access rights between different security domains through the interactive disclosure of digital certificates and access control policies, using trust policy management and negotiation algorithms; this provides an effective means of establishing cross-domain trust relationships and enforcing access control in a distributed environment. The CDSS's asynchronous, many-to-many, and loosely coupled communication features adapt well to data exchange and sharing in dynamic, distributed, and large-scale network environments. In future work, CDSS will be extended with new features to support mobile computing environments.
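The ABAC model described in the abstract evaluates a request against attributes of the subject, object, and environment. A minimal sketch of such a check is below; the attribute names and policy structure are hypothetical illustrations, not the CDSS implementation.

```python
def abac_permit(policy, subject, obj, environment):
    """Grant access only if every attribute required by the policy is
    satisfied by the corresponding subject/object/environment attributes."""
    attrs = {"subject": subject, "object": obj, "environment": environment}
    for scope, required in policy.items():
        for key, allowed in required.items():
            if attrs[scope].get(key) not in allowed:
                return False  # deny on any unsatisfied attribute
    return True

# Hypothetical policy: analysts from domains A or B may access
# public or internal data during business hours.
policy = {
    "subject": {"domain": {"A", "B"}, "role": {"analyst"}},
    "object": {"classification": {"public", "internal"}},
    "environment": {"time_window": {"business_hours"}},
}

granted = abac_permit(
    policy,
    subject={"domain": "A", "role": "analyst"},
    obj={"classification": "internal"},
    environment={"time_window": "business_hours"},
)
```

A production ABAC engine (e.g. one following the XACML model) would add policy combination rules and obligations, but the attribute-matching core is the same idea.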

Keywords: data sharing, cross-domain, data exchange, publish-subscribe

Procedia PDF Downloads 109
4127 Fragility Analysis of a Soft First-Story Building in Mexico City

Authors: Rene Jimenez, Sonia E. Ruiz, Miguel A. Orellana

Abstract:

On September 19, 2017, an Mw = 7.1 intraslab earthquake occurred in Mexico, causing the collapse of about 40 buildings. Many of these were 5- or 6-story buildings with a soft first story, so it is desirable to perform a structural fragility analysis of typical structures representative of those buildings and to propose a reliable structural solution. Here, a typical 5-story building constituted by regular R/C moment-resisting frames in the first story and confined masonry walls in the upper levels, similar to the structures that collapsed in the September 19, 2017 Mexico earthquake, is analyzed. Three structural solutions of the 5-story building are considered: (S1) the building is designed in accordance with the Mexico City Building Code 2004; (S2) the first-story column dimensions of S1 are reduced; and (S3) viscous dampers are added at the first story of solution S2. A number of incremental dynamic analyses are performed for each structural solution, using a 3D structural model. The hysteretic behavior model of the masonry was calibrated with experiments performed at the Laboratory of Structures at UNAM. Ten seismic ground motions are used to excite the structures; they were recorded on intermediate soil of Mexico City, with a dominant period of about 1 s, where the structures are located. The fragility curves of the buildings are obtained for different values of the maximum inter-story drift demand. Results show that solutions S1 and S3 yield similar probabilities of exceeding a given inter-story drift for the same seismic intensity, whereas solution S2 presents a higher probability of exceedance for the same seismic intensity and inter-story drift demand. Therefore, it is concluded that solution S3 (the building with a soft first story and energy dissipation devices) can be a reliable solution from the structural point of view.
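Fragility curves like those described above are commonly expressed as a lognormal CDF of the seismic intensity measure. The sketch below shows that standard form only; the median capacity and dispersion values are hypothetical placeholders, not the parameters fitted in this study.

```python
import math

def fragility(im, median, beta):
    """Lognormal fragility: probability that the inter-story drift demand
    exceeds the limit state, given intensity measure `im`
    (e.g. spectral acceleration at the dominant period, in g)."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))

# Hypothetical parameters: median capacity 0.6 g, dispersion beta = 0.4
curve = [(im, fragility(im, median=0.6, beta=0.4))
         for im in (0.2, 0.4, 0.6, 0.8, 1.0)]
```

In an incremental dynamic analysis workflow, median and beta would be estimated from the drift demands of the scaled ground-motion records (here, the ten Mexico City records) for each structural solution, yielding one curve per solution to compare.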

Keywords: demand hazard analysis, fragility curves, incremental dynamic analysis, soft first story, structural capacity

Procedia PDF Downloads 160