Search results for: exponential differencing scheme (EDS)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1751

431 Computational Fluid Dynamic Modeling of Mixing Enhancement by Stimulation of Ferrofluid under Magnetic Field

Authors: Neda Azimi, Masoud Rahimi, Faezeh Mohammadi

Abstract:

Computational fluid dynamics (CFD) simulation was performed to investigate the effect of ferrofluid stimulation on the hydrodynamic and mass transfer characteristics of two immiscible liquid phases in a Y-micromixer. The main purpose of this work was to develop a numerical model able to simulate the hydrodynamics of ferrofluid flow under a magnetic field and determine its effect on mass transfer characteristics. A uniform external magnetic field was applied perpendicular to the flow direction. The volume of fluid (VOF) approach was used for simulating the multiphase flow of the ferrofluid and the two immiscible liquids. The geometric reconstruction scheme (Geo-Reconstruct), based on piecewise linear interpolation (PLIC), was used for reconstruction of the interface in the VOF approach. The mass transfer rate was defined via an equation as a function of the mass concentration gradient of the transported species and added to the phase interaction panel using a user-defined function (UDF). The magnetic field was solved numerically by the Fluent MHD module, based on the magnetic induction equation method. CFD results were validated against experimental data and good agreement was achieved; the maximum relative error for extraction efficiency was about 7.52%. It was shown that ferrofluid actuation by a magnetic field can serve as an efficient mixing agent for liquid-liquid two-phase mass transfer in microdevices.

Keywords: CFD modeling, hydrodynamic, micromixer, ferrofluid, mixing

Procedia PDF Downloads 196
430 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; applications include denoising signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling-function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field; in the second, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
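
As a rough illustration of the second robust-code construction (redundancy equal to the cube of the information part), the following Python sketch encodes a word and empirically estimates the masking probability of a fixed additive error. The field size and the check rule are our assumptions for the example, not parameters taken from the paper.

```python
import random

P = 2**13 - 1  # a prime, so the arithmetic below is over the field GF(P) (assumed)

def encode(x: int) -> tuple[int, int]:
    """Append the cubic redundancy r = x^3 (mod P) to the information part x."""
    return x, pow(x, 3, P)

def is_masked(error_x: int, error_r: int, x: int) -> bool:
    """An additive error (error_x, error_r) is masked iff the corrupted word
    still satisfies the cubic check r = x^3 (mod P)."""
    x2 = (x + error_x) % P
    r2 = (pow(x, 3, P) + error_r) % P
    return r2 == pow(x2, 3, P)

# Expanding (x+ex)^3 = x^3 + er gives a quadratic in x, so a fixed nonzero
# error is masked for at most 2 of the P information words: a uniform bound.
err = (5, 17)
trials = 10_000
masked = sum(is_masked(*err, random.randrange(P)) for _ in range(trials))
print(f"empirical masking probability: {masked / trials:.4f}")
```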

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 489
429 Water Diffusivity in Amorphous Epoxy Resins: An Autonomous Basin Climbing-Based Simulation Method

Authors: Betim Bahtiri, B. Arash, R. Rolfes

Abstract:

Epoxy-based materials are frequently exposed to high-humidity environments in many engineering applications. As a result, their material properties are degraded by water absorption. A full characterization of the material properties under hygrothermal conditions requires time-consuming and costly experimental tests. To gain insight into the physics of diffusion mechanisms, atomistic simulations have been shown to be effective tools. Concerning the diffusion of water in polymers, spatial trajectories of water molecules are obtained from molecular dynamics (MD) simulations, allowing the interpretation of diffusion pathways at the nanoscale in a polymer network. Conventional MD simulations of water diffusion in amorphous polymers lead to discrepancies at low temperatures due to the short timescales of the simulations. In the proposed model, this issue is solved by using a combined scheme of autonomous basin climbing (ABC) with kinetic Monte Carlo and reactive MD simulations to investigate the diffusivity of water molecules in epoxy resins across a wide range of temperatures. It is shown that the proposed simulation framework estimates kinetic properties of water diffusion in epoxy resins that are consistent with experimental observations, providing a predictive tool for investigating the diffusion of small molecules in other amorphous polymers.
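
To make the kinetic Monte Carlo ingredient concrete, here is a minimal 1-D sketch: energy barriers (which the actual framework would extract from ABC explorations of the energy landscape) feed Arrhenius rates, and a Gillespie-style kMC walk yields a temperature-dependent diffusivity via the Einstein relation. All numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
kB = 8.617e-5                     # Boltzmann constant (eV/K)
nu0 = 1e13                        # attempt frequency (1/s), a common assumption
barriers = np.array([0.45, 0.52, 0.60])   # hypothetical hop barriers (eV)

def kmc_diffusivity(T, n_traj=400, n_steps=500, hop=3.0e-10):
    """Gillespie-style kMC: waiting times drawn from the total Arrhenius rate,
    unbiased hop directions; D estimated from the ensemble MSD (1-D)."""
    rates = nu0 * np.exp(-barriers / (kB * T))   # one rate per hop process
    k_tot = rates.sum()
    t = rng.exponential(1.0 / k_tot, (n_traj, n_steps)).sum(axis=1)
    x = (hop * rng.choice([-1.0, 1.0], (n_traj, n_steps))).sum(axis=1)
    return np.mean(x**2 / (2.0 * t))             # Einstein relation in 1-D

for T in (250, 300, 350, 400):
    print(f"T = {T} K: D ~ {kmc_diffusivity(T):.3e} m^2/s")
```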

Keywords: epoxy resins, water diffusion, autonomous basin climbing, kinetic Monte Carlo, reactive molecular dynamics

Procedia PDF Downloads 67
428 Performance Comparison of Resource Allocation without Feedback in Wireless Body Area Networks by Various Pseudo Orthogonal Sequences

Authors: Ojin Kwon, Yong-Jin Yoon, Liu Xin, Zhang Hongbao

Abstract:

A Wireless Body Area Network (WBAN) is a short-range wireless communication network around the human body for various applications such as wearable devices, entertainment, military, and especially medical devices. WBANs are attracting attention for continuous health monitoring systems, including diagnostic procedures, early detection of abnormal conditions, and prevention of emergency situations. Compared to cellular networks, inter- and intra-network interference is more difficult to control in WBAN systems due to limited power, limited computational capability, patient mobility, and non-cooperation among WBANs. In this paper, we compare the performance of resource allocation schemes based on several Pseudo Orthogonal Codewords (POCs) to mitigate inter-WBAN interference. POCs have previously been widely exploited as protocol sequences and optical orthogonal codes. Each POC has different auto- and cross-correlation properties and spectral efficiency, depending on its construction. To identify different WBANs, several pseudo orthogonal patterns based on POCs are exploited for resource allocation. By simulating these pseudo orthogonal resource allocations in MATLAB, we obtain the performance of WBANs for different POCs and can analyze and evaluate the suitability of POCs for resource allocation in WBAN systems.
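
A small numpy sketch of the correlation properties that distinguish the codeword families: the periodic auto- and cross-correlations of two hypothetical length-7, weight-3 codewords. The actual POC constructions compared in the paper are not reproduced here.

```python
import numpy as np

def periodic_correlation(a, b):
    """Periodic cross-correlation of two 0/1 codewords at all cyclic shifts."""
    return np.array([np.sum(a * np.roll(b, s)) for s in range(len(a))])

# Two hypothetical length-7, weight-3 codewords identifying two WBANs
c1 = np.array([1, 1, 0, 1, 0, 0, 0])
c2 = np.array([1, 0, 1, 0, 0, 1, 0])

print("auto (c1):", periodic_correlation(c1, c1))  # peak = weight at shift 0
print("cross    :", periodic_correlation(c1, c2))  # low values -> few slot collisions
```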

Keywords: wireless body area network, body sensor network, resource allocation without feedback, interference mitigation, pseudo orthogonal pattern

Procedia PDF Downloads 353
427 An Efficient Traceability Mechanism in the Audited Cloud Data Storage

Authors: Ramya P, Lino Abraham Varghese, S. Bose

Abstract:

With cloud storage services, data can be stored in the cloud and shared across multiple users. However, unexpected hardware/software failures and human errors can easily cause data stored in the cloud to be lost or corrupted, affecting its integrity. Some mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. However, public auditing of the integrity of shared data with the existing mechanisms unavoidably reveals confidential information, such as the identity of the signer, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in the shared data is kept confidential from public verifiers, who can verify shared data integrity without retrieving the entire file. On demand, however, the signer of each block is revealed to the owner alone. In a static group, the group private key is generated once by the owner, whereas in a dynamic group the group private key changes whenever users are revoked from the group. When users leave the group, the blocks they had signed are re-signed by the cloud service provider instead of the owner, which is handled efficiently by a proxy re-signature scheme.

Keywords: data integrity, dynamic group, group signature, public auditing

Procedia PDF Downloads 392
426 Smart Water Main Inspection and Condition Assessment Using a Systematic Approach for Pipes Selection

Authors: Reza Moslemi, Sebastien Perrier

Abstract:

Water infrastructure deterioration can result in increased operational costs, owing to increased repair needs and non-revenue water, and consequently a reduced level of service and customer satisfaction. Various water main condition assessment technologies have been introduced to the market in order to evaluate the level of pipe deterioration and to develop appropriate asset management and pipe renewal plans. One of the challenges for any condition assessment and inspection program is to determine the percentage of the water network, and the combination of pipe segments, to be inspected in order to obtain a meaningful representation of the status of the entire network with a desirable level of accuracy. Traditionally, condition assessment has been conducted by selecting pipes based on age or location. However, this may not offer the best approach, and it is believed that a smart sampling methodology can achieve a better and more reliable estimate of the condition of a water network. This research investigates three sampling methodologies: random, stratified, and systematic. It is demonstrated that selecting pipes based on the proposed clustering and sampling scheme can considerably improve the ability of the inspected subset to represent the condition of the wider network. With a smart sampling methodology, a smaller data sample can provide the same insight as a larger one. This offers increased efficiency and cost savings for condition assessment processes and projects.
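
For illustration, the three sampling strategies compared in the study can be sketched in a few lines of Python; the pipe inventory and the age-based strata here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pipes, n_sample = 1000, 100
age = rng.integers(1, 80, n_pipes)             # hypothetical pipe ages (years)

# Random sampling: every pipe equally likely
random_idx = rng.choice(n_pipes, n_sample, replace=False)

# Stratified sampling: equal share from each age band (strata are assumed)
strata = np.digitize(age, [20, 40, 60])        # 4 age bands
stratified_idx = np.concatenate([
    rng.choice(np.where(strata == s)[0], n_sample // 4, replace=False)
    for s in range(4)])

# Systematic sampling: every k-th pipe after a random start
k = n_pipes // n_sample
start = rng.integers(k)
systematic_idx = np.arange(start, n_pipes, k)[:n_sample]
```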

Keywords: condition assessment, pipe degradation, sampling, water main

Procedia PDF Downloads 150
425 Technical, Environmental and Financial Assessment for Optimal Sizing of Run-of-River Small Hydropower Project: Case Study in Colombia

Authors: David Calderon Villegas, Thomas Kaltizky

Abstract:

Run-of-river (RoR) hydropower projects represent a viable, clean, and cost-effective alternative to dam-based plants and provide decentralized power production. However, the cost-effectiveness of an RoR scheme depends on the proper selection of site and design flow, which is a challenging task because it requires multivariate analysis. In this respect, this study presents the development of an investment decision support tool for assessing the optimal size of an RoR scheme considering technical, environmental, and cost constraints. The net present value (NPV) from a project perspective is used as the objective function for supporting the investment decision. The tool has been tested by applying it to an actual RoR project recently proposed in Colombia. The results show that the optimum in financial terms does not match the flow that maximizes energy generation from the river's available flow. For the case study, the flow that maximizes energy corresponds to 5.1 m³/s, whereas 2.1 m³/s maximizes the investor's NPV. Finally, a sensitivity analysis is performed to determine the NPV as a function of changes in the debt rate, the electricity price, and the CapEx. Even in the worst-case scenario, the optimal size represents a positive business case, with an NPV of USD 2.2 million and an IRR 1.5 times higher than the discount rate.
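
The sizing logic can be sketched as a one-dimensional search over candidate design flows, evaluating annual energy and NPV for each. The flow-duration, energy, and cost models below are crude placeholders, so the numbers will not reproduce the paper's 5.1 and 2.1 m³/s optima, only the qualitative gap between the energy-optimal and NPV-optimal flows.

```python
import numpy as np

flows = np.linspace(0.5, 6.0, 56)         # candidate design flows (m^3/s)

def exceedance(q):
    return np.exp(-q / 2.5)               # assumed flow-duration behaviour

def annual_energy_mwh(q_design, head=120.0, eta=0.85):
    """Rated power times a capacity factor limited by flow availability."""
    p_kw = 9.81 * q_design * head * eta   # P[kW] = 9.81 * Q * H * eta
    return p_kw * 8760 * exceedance(q_design) / 1000

def npv_musd(q_design, price=60.0, rate=0.10, years=30):
    capex = 2.5e6 * q_design**0.8         # assumed economy of scale
    revenue = annual_energy_mwh(q_design) * price
    annuity = (1 - (1 + rate) ** -years) / rate
    return (revenue * annuity - capex) / 1e6

energies = [annual_energy_mwh(q) for q in flows]
npvs = [npv_musd(q) for q in flows]
print("flow maximizing energy:", flows[int(np.argmax(energies))])
print("flow maximizing NPV   :", flows[int(np.argmax(npvs))])
```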

Keywords: small hydropower, renewable energy, RoR schemes, optimal sizing, objective function

Procedia PDF Downloads 132
424 Experimental and Theoretical Investigation of Slow Reversible Deformation of Concrete in Surface-Active Media

Authors: Nika Botchorishvili, Olgha Giorgishvili

Abstract:

Many years of investigation into the nature of damping creep of rigid bodies and materials have led to the discovery of the fundamental character of this phenomenon. It occurs only when a rigid body comes into contact with a surface-active medium (liquid or gaseous), which brings about a decrease in the free surface energy of the rigid body as a result of adsorption, chemisorption, or wetting. The reversibility of the process consists in the gradual disappearance of creep deformation once the action of the surface-active medium stops. To clarify the essence of these processes, a physical model is constructed using Griffith's scheme and well-known representations of deformation origination and failure processes. The total creep deformation is caused by the formation and opening of microcracks throughout the material volume under the action of load. This supposedly happens in macroscopically homogeneous silicate and organic glasses, while in polycrystals (tuff, gypsum, steel) contacting a surface-active medium, microcracks are formed mainly on the grain boundaries. The creep of rubber is due to its swelling activated by stress. Acknowledgment: All experiments are financially supported by the Shota Rustaveli National Science Foundation of Georgia, within the project "Study of Properties of Concretes (Both Ordinary and Compacted) Made of Local Building Materials and Containing Admixtures, and Their Further Introduction in Construction Operations and Road Building", DP2016_26, 22.12.2016.

Keywords: process reversibility, surface-active medium, Rebinder’s effect, micro crack, creep

Procedia PDF Downloads 135
423 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape

Authors: Chen Bo, Wen Zengping

Abstract:

Based on the results of previous studies, we focus on real ground motion selection and scaling methods for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately to make them compatible with the site-specific hazard level considered. Thus, an optimized selection and scaling method is established that uses not only a Monte Carlo simulation method, which creates stochastic simulation spectra based on the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its application in structural fragility analysis is demonstrated through case studies. Compared to a previous scheme that did not consider the uncertainty of the target spectrum, the method shown here ensures that the selected records are in good agreement with the median value, standard deviation, and spectral correlation of the target spectrum, and better captures the uncertainty of the site-specific hazard level. Meanwhile, it helps improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work can provide a reasonable and reliable basis for structural seismic evaluation under scenario earthquake environments.
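
A minimal numpy sketch of the Monte Carlo step described above: stochastic response spectra are drawn from a multivariate lognormal distribution around the target spectrum. The target medians, dispersions, and the correlation model are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])       # spectral periods (s)
median_sa = np.array([0.8, 1.0, 0.7, 0.4, 0.2])     # target median Sa (g), assumed
sigma_ln = np.array([0.5, 0.5, 0.55, 0.6, 0.65])    # target ln-std devs, assumed

# Assumed correlation decaying with log-period separation (models may differ)
lnT = np.log(periods)
corr = np.exp(-np.abs(lnT[:, None] - lnT[None, :]))
cov = corr * np.outer(sigma_ln, sigma_ln)

# Stochastic simulation spectra: samples from the multivariate lognormal target
n = 200
ln_samples = rng.multivariate_normal(np.log(median_sa), cov, size=n)
spectra = np.exp(ln_samples)            # n simulated response spectra

# A record set would then be selected/scaled to best match these samples.
print(spectra.mean(axis=0))             # ~ median * exp(sigma^2 / 2) per period
```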

Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape

Procedia PDF Downloads 292
422 Measuring the Extent of Equalization in Fiscal Transfers in India: An Index-Based Approach

Authors: Ragini Trehan, D.K. Srivastava

Abstract:

In the post-planning era, India's fiscal transfers from the central to state governments are solely determined by the Finance Commissions (FCs). While in some well-established federations, such as Australia, Canada, and Germany, equalization serves as the guiding principle of fiscal transfers and is constitutionally mandated, in India it is not explicitly mandated, and FCs attempt to implement it indirectly through a combination of a formula-based share in the divisible pool of central taxes supplemented by a set of grants. In this context, it is important to measure the extent of equalization achieved through FC transfers, with a view to improving the design of such transfers. This study uses an index-based methodology for measuring the degree of equalization achieved through FC transfers covering the period from FC12 to the first year of FC15 (2005-06 to 2020-21). The 'Index of Equalization' shows that the extent of equalization has remained low, in the range of 30% to 37%, over the four Commission periods under review. The highest degree of equalization, 36.7%, was witnessed in the FC12 period, and the lowest, 29.5%, in the FC15(1) period. The equalizing efficiency of recommended transfers also shows a consistent fall, from 11.4% in the FC12 period to 7.5% by the FC15(1) period. Further, considering progressivity in fiscal transfers as a special case of equalizing transfers, this study shows that the scheme of per capita total transfers, when determined using the equalization approach, is more progressive and is characterized by minimal deviations compared with the profile of transfers recommended by recent FCs.

Keywords: fiscal transfers, index of equalization, equalizing efficiency, fiscal capacity, expenditure needs, Finance Commission, tax effort

Procedia PDF Downloads 74
421 Revolutionary Violence and Echoes of the «Thou Shalt Not Kill» Debate: A Tragic Reading of the Class Conflict in Colombia

Authors: Jaime Otavo

Abstract:

Oscar del Barco, a former member of Los Montoneros, an Argentine guerrilla group of the 1970s, published a letter in 2004 that sparked a heated debate in his country about revolutionary violence. Del Barco, on the subject of «No matarás» (Thou shalt not kill) –as this debate came to be known– wrote to Sergio Schmucler, his addressee, the following: "There is no 'ideal' that justifies the death of a man. The founding principle of any community is 'Thou shalt not kill'. Thou shalt not kill the man because every man is sacred, and every man is all men." In this paper, the «No matarás» debate is used to problematize two interconnected ideas that, in Colombia, underpinned the use of revolutionary violence by the guerrilla movements that emerged in the 1970s: on the one hand, an anthropological optimism; on the other, a theological scheme of converting violence into justice. Based on this, two arguments are put forward: 1) that revolutionary violence arose from an ethical-political certainty, namely the confidence in being on the right side of history (because the violent ones were others), but 2) that its persistence over time made visible a tragic element, that is, that the bipolarity between victim and executioner, good and evil, or friend and foe that is inscribed in the class struggle is a false dilemma, for in the context of revolutionary violence –as in the context of Greek tragedy– no one ever truly has to make a decision, nor can they. For this reason, it is maintained that the fundamental aspect of guerrilla violence in Colombia is that it imposed itself as a violence of negativity, one which not only exceeded the capacity of the extreme left to control its revolutionary praxis but also exploited the link with the political subjectivation to which it aspired: the proletariat as the gravedigger of the bourgeoisie.

Keywords: marxism, social movements, armed struggle, debate thou shalt not kill

Procedia PDF Downloads 81
420 Bringing the Confidence Intervals into Choropleth Mortality Map: An Example of Tainan, Taiwan

Authors: Tzu-Jung Tseng, Pei-Hsuen Han, Tsung-Hsueh Lu

Abstract:

Background: Choropleth mortality maps are commonly used to identify areas with higher mortality risk. However, the use of a choropleth map alone might result in the misinterpretation of differences in mortality rates between areas: two areas with different color shades might not actually have a significant difference in mortality rates, and the mortality rates estimated for an area with a small population are less stable. We suggest bringing the 95% confidence intervals (CI) into the choropleth mortality map to help users interpret areal mortality rate differences more properly. Method: In the first choropleth mortality map, we used only three colors to indicate the standardized mortality ratio (SMR) for each district in Tainan, Taiwan. Red denotes that the SMR of a district was significantly higher than the Tainan average; conversely, green indicates that the SMR was significantly lower than the Tainan average. Yellow indicates that the SMR of a district was not statistically significantly different from the Tainan average. In the second choropleth mortality map, we used a traditional sequential color scheme (color ramp) for the different SMRs of the 37 districts in Tainan City, with a bar chart of each SMR and its 95% CI, in which users could examine whether the 95% CI lines of two districts overlapped (a nonsignificant difference). Results: The all-causes SMRs of the districts in Tainan for 2008 to 2013 ranged from 0.77 (95% CI 0.75 to 0.80) in East District to 1.39 (95% CI 1.25 to 1.52) in Beimen. In the first choropleth mortality map, only 16 of the 37 districts were red and 8 districts were green. The number of red districts differed across causes of death. In the second choropleth mortality map, the added bar chart with 95% CI lines for each district allowed users to visualize the SMR differences between districts. Conclusion: Through the use of 95% CIs, users can interpret areal mortality differences more properly.
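
A small Python sketch of the three-color classification: each district's SMR gets an exact Poisson 95% CI, and a district is colored red or green only when the interval excludes 1 (the city average). The counts below are hypothetical, and the exact CI method used by the authors is not stated, so the Garwood chi-square interval here is an assumption.

```python
from scipy import stats

def smr_with_ci(observed, expected, alpha=0.05):
    """SMR = O/E with an exact Poisson CI (Garwood chi-square method)."""
    smr = observed / expected
    lo = stats.chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    hi = stats.chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lo, hi

def classify(observed, expected):
    """Red/green only when the CI excludes 1, i.e. the city average."""
    smr, lo, hi = smr_with_ci(observed, expected)
    if lo > 1:
        return "red"     # significantly above average
    if hi < 1:
        return "green"   # significantly below average
    return "yellow"      # not significantly different

# Hypothetical district counts: observed deaths vs. expected from city rates
print(classify(130, 100), classify(40, 60), classify(55, 50))
```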

Keywords: choropleth map, small area variation, standardized mortality ratio (SMR), Taiwan

Procedia PDF Downloads 325
419 Changing the Way South Africa Think about Parking Provision at Tertiary Institutions

Authors: M. C. Venter, G. Hitge, S. C. Krygsman, J. Thiart

Abstract:

For decades, South Africa has been planning transportation systems from a supply-side, rather than a demand-side, perspective. In terms of parking, this takes the form of minimum parking requirements enforced by city officials. Newer insight indicates that South Africa needs to re-think this philosophy in light of a new policy environment that desires a different outcome. Urban policies have shifted from reliance on the private car for access to employing a wide range of alternative modes. Car-dominated travel is influenced by various parameters, of which the availability and location of parking play a significant role. The question, therefore, is what the right strategy is to achieve the desired transport outcomes for South Africa. This paper assesses the issue with regard to parking provision, specifically at a tertiary institution. A parking audit was conducted at the Stellenbosch campus of Stellenbosch University, monitoring occupancy at all 60 parking areas every hour during business hours over a five-day period. The data from this survey were compared with the prescribed number of parking bays according to the Stellenbosch Municipality zoning scheme (which requires a minimum of 0.4 bays per student). The analysis shows that, with a provision of 0.09 bays per student, the maximum total daily occupation of all the parking areas did not exceed an 80% occupation rate. It is concluded that the prevailing parking standards are not supportive of the new urban and transport policy environment and are extremely conservative from a practical demand point of view.

Keywords: parking provision, parking requirements, travel behaviour, travel demand management

Procedia PDF Downloads 180
418 SISSLE in Consensus-Based Ripple: Some Improvements in Speed, Security, Last Mile Connectivity and Ease of Use

Authors: Mayank Mundhra, Chester Rebeiro

Abstract:

Cryptocurrencies are rapidly finding wide application in areas such as real-time gross settlement and payment systems. Ripple is a cryptocurrency that has gained prominence with banks and payment providers. It solves the Byzantine Generals' Problem with its Ripple Protocol Consensus Algorithm (RPCA), where each server maintains a list of servers, called its Unique Node List (UNL), that represents the network for that server and is trusted not to collectively defraud it. The server believes that the network has come to consensus when the members of its UNL come to consensus on a transaction. In this paper, we improve Ripple to achieve better speed, security, last mile connectivity, and ease of use. We implement guidelines and automated systems for building and maintaining UNLs for resilience, robustness, improved security, and efficient information propagation. We enhance the system to ensure that each server receives information from across the whole network rather than just from its UNL members. We also introduce the paradigm of UNL overlap as a function of information propagation and the trust a server assigns to its own UNL. Our design not only reduces vulnerabilities such as eclipse attacks, but also makes it easier to identify malicious behaviour and entities attempting to fraudulently double-spend or stall the system. We provide experimental evidence of the benefits of our approach over the current Ripple scheme: we observe speedups of ≥ 4.97x in information propagation and ≥ 3.16x in consensus, with success-rate improvements of 98.22x and 51.70x, respectively.

Keywords: Ripple, Kelips, unique node list, consensus, information propagation

Procedia PDF Downloads 145
417 Coarse-Grained Molecular Simulations to Estimate Thermophysical Properties of Phase Equilibria

Authors: Hai Hoang, Thanh Xuan Nguyen Thi, Guillaume Galliero

Abstract:

Coarse-grained (CG) molecular simulations have been shown to be an efficient way to estimate the thermophysical (static and dynamic) properties of fluids. Several strategies for defining CG molecular models have been developed and reported in the literature. Among them, those based on a top-down strategy (i.e., CG molecular models related to macroscopic observables), despite being heuristic, have increasingly gained attention, probably owing to their simplicity of implementation and their ability to provide reasonable results for not only simple but also complex systems. Regarding the simple force fields associated with these CG molecular models, the four-parameter Mie chain model has been found to be one of the best compromises for describing static thermophysical properties (e.g., phase diagram, saturation pressure). However, the parameterization procedures for these Mie-chain CG molecular models given in the literature are generally insufficient to simultaneously provide static and dynamic (e.g., viscosity) properties. To deal with such situations, we have extended the corresponding-states approach using a quantity associated with the liquid viscosity. Results obtained from molecular simulations show that our approach yields good estimates of both static and dynamic thermophysical properties for various real non-associating fluids. In addition, we show that for simple (e.g., phase diagram, saturation pressure) and complex (e.g., thermodynamic response functions, thermodynamic energy potentials) static properties, our scheme generally provides improved results compared with existing approaches.
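
For reference, the Mie pair potential underlying the chain model has the standard form below; the sketch just evaluates it, with the (eps, sigma, n, m) values chosen arbitrarily rather than taken from any fluid parameterization in the paper.

```python
import numpy as np

def mie(r, eps, sigma, n=12.0, m=6.0):
    """Mie (generalized Lennard-Jones) pair potential; (eps, sigma, n, m)
    are the four parameters of the CG chain model discussed above."""
    c = (n / (n - m)) * (n / m) ** (m / (n - m))   # standard Mie prefactor
    return c * eps * ((sigma / r) ** n - (sigma / r) ** m)

r = np.linspace(0.9, 3.0, 200)                # reduced separations
u = mie(r, eps=1.0, sigma=1.0, n=14.0, m=6.0)
# Well minimum sits at r_min = (n/m)^(1/(n-m)) * sigma
print("minimum near r =", r[np.argmin(u)])
```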

Keywords: coarse-grained model, mie potential, molecular simulations, thermophysical properties, phase equilibria

Procedia PDF Downloads 336
416 An Eulerian Method for Fluid-Structure Interaction Simulation Applied to Wave Damping by Elastic Structures

Authors: Julien Deborde, Thomas Milcent, Stéphane Glockner, Pierre Lubin

Abstract:

A fully Eulerian method is developed to solve the problem of fluid-elastic structure interaction based on a 1-fluid method. The interface between the fluid and the elastic structure is captured by a level set function, advected by the fluid velocity and solved with a WENO 5 scheme. The elastic deformations are computed in an Eulerian framework thanks to the backward characteristics. We use the neo-Hookean or Mooney-Rivlin hyperelastic models, and the elastic forces are incorporated as a source term in the incompressible Navier-Stokes equations. The velocity/pressure coupling is solved with a pressure-correction method, and the equations are discretized by finite volume schemes on a Cartesian grid. The main difficulty is that large deformations in the fluid cause numerical instabilities. In order to avoid these problems, we use a re-initialization process for the level set and linear extrapolation of the backward characteristics. First, we verify and validate our approach on several test cases, including the FSI benchmark proposed by Turek. Next, we apply this method to study the wave damping phenomenon, which is a means of reducing the impact of waves on the coastline. So far, to our knowledge, only rigid or one-dimensional elastic structures have been simulated in the literature. We propose to place elastic structures on the seabed, and we present results where 50% of the wave energy is absorbed.
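
As a minimal illustration of the WENO 5 ingredient, the sketch below advects a 1-D level-set profile at constant positive speed using the standard Jiang-Shu fifth-order reconstruction. The grid, time step, forward-Euler update (a TVD Runge-Kutta integrator would be used in practice), and periodic boundaries are assumptions made for brevity.

```python
import numpy as np

def weno5_left(f):
    """Fifth-order WENO reconstruction of the left-biased interface values
    f_{i+1/2} from cell values (periodic boundaries for brevity)."""
    fm2, fm1, f0 = np.roll(f, 2), np.roll(f, 1), f
    fp1, fp2 = np.roll(f, -1), np.roll(f, -2)

    # Candidate third-order reconstructions on the three substencils
    q0 = (2*fm2 - 7*fm1 + 11*f0) / 6
    q1 = (-fm1 + 5*f0 + 2*fp1) / 6
    q2 = (2*f0 + 5*fp1 - fp2) / 6

    # Jiang-Shu smoothness indicators
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2

    eps = 1e-6
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*q0 + a1*q1 + a2*q2) / (a0 + a1 + a2)

# Advect a level-set profile with positive speed u (upwind from the left)
N, u = 200, 1.0
dx = 1.0 / N
phi = np.tanh((np.linspace(0, 1, N) - 0.3) / 0.05)   # smoothed interface
dt = 0.4 * dx / u
for _ in range(100):
    flux = u * weno5_left(phi)                  # f_{i+1/2}
    phi -= dt / dx * (flux - np.roll(flux, 1))  # conservative update
```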

Keywords: damping wave, Eulerian formulation, finite volume, fluid structure interaction, hyperelastic material

Procedia PDF Downloads 323
415 Coordinated Interference Canceling Algorithm for Uplink Massive Multiple Input Multiple Output Systems

Authors: Messaoud Eljamai, Sami Hidouri

Abstract:

Massive multiple-input multiple-output (MIMO) is an emerging technology for new cellular networks such as 5G systems. Its principle is to use many antennas per cell in order to maximize the network's spectral efficiency. Inter-cell interference remains a fundamental problem, and massive MIMO does not escape this rule: it improves performance only when the number of antennas is significantly greater than the number of users, which considerably limits the network's spectral efficiency. In this paper, a coordinated detector for an uplink massive MIMO system is proposed in order to mitigate inter-cell interference. The proposed scheme combines the coordinated multipoint technique with an interference-cancelling algorithm. It requires the serving cell to send its received symbols, after processing, decision, and error detection, to the interfered cells via a backhaul link. Each interfered cell is then capable of eliminating inter-cell interference by generating the interfering users' contribution and subtracting it from the received signal. The resulting signal is more reliable than the original received signal, which allows the uplink massive MIMO system to improve its performance dramatically. Simulation results show that the proposed detector improves system spectral efficiency compared to classical linear detectors.
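
The cancellation step can be sketched in a few lines of numpy: the interfered base station reconstructs the interfering users' contribution from the symbols shared over the backhaul and subtracts it before detection. The array sizes, the assumption that the shared symbols were decided correctly, and knowledge of the interfering channel H_int are all simplifications for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n_rx, n_users = 64, 8                # BS antennas, users per cell

def rayleigh(shape):
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

H_own, H_int = rayleigh((n_rx, n_users)), rayleigh((n_rx, n_users))
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
s_own, s_int = rng.choice(qpsk, n_users), rng.choice(qpsk, n_users)
noise = 0.05 * rayleigh(n_rx)

y = H_own @ s_own + H_int @ s_int + noise   # received uplink signal

# Symbols decided at the interfering cell's serving BS, shared via backhaul;
# assumed correct here after error detection.
s_int_hat = s_int
y_clean = y - H_int @ s_int_hat             # cancel the inter-cell term

# Zero-forcing detection before vs. after cancellation
zf = np.linalg.pinv(H_own)
print("residual before:", np.linalg.norm(zf @ y - s_own))
print("residual after :", np.linalg.norm(zf @ y_clean - s_own))
```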

Keywords: massive MIMO, COMP, interference canceling algorithm, spectral efficiency

Procedia PDF Downloads 147
414 Evaluation of Vehicle Classification Categories: Florida Case Study

Authors: Ren Moses, Jaqueline Masaki

Abstract:

This paper addresses the need for an accurate and updated vehicle classification system through a thorough evaluation of vehicle class categories, identifying errors arising from the existing system and proposing modifications. Data collected from two permanent traffic monitoring sites in Florida were used to evaluate the performance of the existing vehicle classification table. The vehicle data were collected and classified by automatic vehicle classifiers (AVC), and a video camera was used to obtain ground truth data. The Federal Highway Administration (FHWA) vehicle classification definitions were used to define vehicle classes from the video and compare them to the data generated by the AVC in order to identify the sources of misclassification. Six types of errors were identified. Modifications were made to the classification table to improve the classification accuracy. The results of this study include an updated vehicle classification table that reduces total error by 5.1%, a step-by-step procedure for evaluating vehicle classification studies, and recommendations to improve the FHWA 13-category rule set. The recommendations indicate that the vehicle classification definitions in this scheme need to be updated to reflect the distribution of current traffic. The presented results will be of interest to state transportation departments, consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design, and maintenance of transportation infrastructure.

Keywords: vehicle classification, traffic monitoring, pavement design, highway traffic

Procedia PDF Downloads 180
413 Atomic Town: History and Vernacular Heritage at the Mary Kathleen Uranium Mine in Australia

Authors: Erik Eklund

Abstract:

Mary Kathleen was a purpose-built company town located in northwest Queensland, Australia, created to work a rich uranium deposit discovered in the area in July 1954. The town was complete by 1958, possessing curved streets, modern materials, and a progressive urban planning scheme. Formed in the minds of corporate executives and architects and made manifest in arid-zone country between Cloncurry and Mount Isa, Mary Kathleen was a modern marvel in the outback, a town that tamed the wild country of northwest Queensland, or so it seemed. The town was also a product of the Cold War. In the context of a nuclear arms race between the Soviet Union and her allies and the United States of America (USA) and her allies, a rapid rush to locate, mine, and process uranium after 1944 led to the creation of uranium towns in Czechoslovakia, Canada, the Soviet Union, the USA, and Australia, of which Mary Kathleen was one example. Mary Kathleen closed in 1981, and most of the town's infrastructure was removed. Since then, the town's ghostly remains have attracted travellers and tourists. Never an officially sanctioned tourist site, the area has nevertheless become a regular stop for campers and day trippers, who have often engaged with the site without formal interpretation. This paper explores the status of this vernacular heritage and asks why it has not gained any official status, and what visitors might see in the place despite its uncertain status.

Keywords: uranium mining, planned communities, official heritage, vernacular heritage, Australian history

Procedia PDF Downloads 89
412 Quasiperiodic Magnetic Chains as Spin Filters

Authors: Arunava Chakrabarti

Abstract:

A one-dimensional chain of magnetic atoms, representative of a quantum gas in an artificial quasi-periodic potential and modeled by the well-known Aubry-Andre function and its variants, is studied in respect of its capability of working as a spin filter for arbitrary spins. The basic formulation is explained in terms of a perfectly periodic chain first, where it is shown that a definite correlation between the spin S of the incoming particles and the magnetic moment h of the substrate atoms can open up a gap in the energy spectrum. This is crucial for a spin filtering action. The simple one-dimensional chain is shown to be equivalent to a (2S+1)-strand ladder network, and this equivalence is exploited to work out the condition for the opening of gaps. The formulation is then applied to a one-dimensional chain with quasi-periodic variation in the site potentials, the magnetic moments, and their orientations, following an Aubry-Andre modulation and its variants. In addition, we show that a certain correlation between the system parameters can generate absolutely continuous bands in such systems, populated by Bloch-like extended wave functions only, signaling the possibility of a metal-insulator transition. This is a case of correlated (deterministic) disorder, and the results provide a non-trivial variation on the famous Anderson localization problem. We have worked within a tight-binding formalism and present explicit results for spin-1/2, spin-1, spin-3/2, and spin-5/2 particles incident on the magnetic chain to explain our scheme and the central results.
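
For orientation, here is a minimal spinless tight-binding sketch of the Aubry-Andre chain mentioned above: on-site energies follow the quasiperiodic cosine modulation, and the mean inverse participation ratio of the eigenstates jumps across the self-dual point lambda = 2t, separating extended from localized states. The spin-dependent substrate moments of the actual model are omitted here.

```python
import numpy as np

N, t, phi = 233, 1.0, 0.0
beta = (np.sqrt(5) - 1) / 2          # inverse golden mean, the usual irrational

def mean_ipr(lam):
    """Mean inverse participation ratio over all eigenstates of an AA chain."""
    onsite = lam * np.cos(2 * np.pi * beta * np.arange(N) + phi)
    H = (np.diag(onsite)
         - t * np.diag(np.ones(N - 1), 1)
         - t * np.diag(np.ones(N - 1), -1))
    _, vecs = np.linalg.eigh(H)
    # IPR ~ 1/N for extended states, O(1) for localized ones
    return np.mean(np.sum(np.abs(vecs) ** 4, axis=0))

for lam in (1.0, 2.0, 3.0):          # below, at, and above the self-dual point
    print(f"lambda = {lam}: mean IPR = {mean_ipr(lam):.4f}")
```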

Keywords: Aubry-Andre model, correlated disorder, localization, spin filter

Procedia PDF Downloads 355
411 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a panel data regression model (PDRM) by extending previous work on balanced panel data estimation, in the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the three estimators considered were assessed on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted under the specified conditions, and the best-fitting model is that of the within estimator when heteroscedasticity is severe, at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature, with little or no periodicity effects.
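
A stripped-down version of such a Monte Carlo assessment, with a linear heteroscedasticity function and the within (fixed-effects) estimator: the dimensions, the variance function, and the single regressor are illustrative choices, not the paper's 81-variation design.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, beta, reps = 30, 10, 1.5, 1000   # units, periods, true slope, iterations

def within_estimate(y, x):
    """Within (fixed-effects) estimator: demean by unit, then pooled OLS."""
    yd = y - y.mean(axis=1, keepdims=True)
    xd = x - x.mean(axis=1, keepdims=True)
    return (xd * yd).sum() / (xd**2).sum()

estimates = np.empty(reps)
for r in range(reps):
    x = rng.uniform(1, 5, size=(N, T))           # uniform regressor
    mu = rng.normal(size=(N, 1))                 # unit effects
    sigma2 = 0.5 + 0.8 * x                       # linear heteroscedasticity
    eps = rng.normal(scale=np.sqrt(sigma2))
    y = mu + beta * x + eps
    estimates[r] = within_estimate(y, x)

abias = np.abs(estimates - beta).mean()
mse = ((estimates - beta)**2).mean()
print(f"ABIAS={abias:.4f}  MSE={mse:.5f}  RMSE={np.sqrt(mse):.5f}")
```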

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 141
410 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID

Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis

Abstract:

Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest x-rays and unpredictable staff shortages placed a huge strain on the department's workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology; there could therefore be demand for a program that detects acute imaging changes compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal chest x-rays and 77 abnormal chest x-rays from patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate, and then test itself. The DICOM and PNG image formats were selected due to their lossless compression. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training involved teaching the convolutional neural network what constituted a normal study and which x-ray changes were compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were in batch sizes of 8 and underwent 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate, with the area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest x-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. Following a discussion with our programmer, there are areas where modifications to the weighting of the algorithm could further improve the detection rates. Given the high detection rate of the program and the potential ease of implementation, it would be effective in assisting staff not trained in radiology to detect subtle changes that might otherwise not be appreciated on imaging. Limitations also include the lack of a differential diagnosis and of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program, and widening its scope to detect multiple pathologies such as lung masses, would greatly assist both the radiology department and our colleagues by increasing workflow and detection rates.
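
A minimal Keras sketch of such a binary chest x-ray classifier. The architecture, image size, and directory layout are illustrative assumptions; only the batch size (8) and number of epochs (25) follow the abstract, and the authors' actual network is not described in enough detail to reproduce.

```python
import tensorflow as tf
from tensorflow.keras import layers

IMG = (224, 224)  # assumed input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/train", image_size=IMG, batch_size=8)   # 50 positive / 50 negative
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/val", image_size=IMG, batch_size=8)     # 14 / 14

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG + (3,)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # P(COVID-compatible changes)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=25)
```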

Keywords: artificial intelligence, COVID, neural network, machine learning

Procedia PDF Downloads 93
409 Evaluation of Non-Staggered Body-Fitted Grid Based Solution Method in Application to Supercritical Fluid Flows

Authors: Suresh Sahu, Abhijeet M. Vaidya, Naresh K. Maheshwari

Abstract:

Efforts to understand the heat transfer behavior of supercritical water in supercritical water-cooled reactors (SCWRs) are ongoing worldwide to help meet future energy demand. The higher thermal efficiency of these reactors compared to conventional nuclear reactors is one of the driving forces attracting the attention of nuclear scientists. In this work, a solution procedure is described for solving supercritical fluid flow problems in complex geometries. The solution procedure is based on a non-staggered grid. All governing equations are discretized by the finite volume method (FVM) in a curvilinear coordinate system. Convective terms are discretized by a first-order upwind scheme, and a central difference approximation is used to discretize the diffusive parts. The k-ε turbulence model with a standard wall function is employed. The SIMPLE solution procedure is implemented for the curvilinear coordinate system. Based on this solution method, a 3-D computational fluid dynamics (CFD) code has been developed. In order to demonstrate the capability of this CFD code for supercritical fluid flows, heat transfer to supercritical water in circular tubes has been considered as a test problem. Results obtained with the code have been compared with experimental results reported in the literature.
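
To illustrate the named discretization choices on the simplest possible case, the sketch below assembles a steady 1-D advection-diffusion equation with first-order upwind convection and central-difference diffusion on a uniform grid. The geometry, property values, and simplified Dirichlet treatment are assumptions, far removed from the full curvilinear SIMPLE machinery of the paper.

```python
import numpy as np

N, L = 50, 1.0
rho, u, gamma = 1.0, 2.0, 0.1          # density, velocity, diffusivity
dx = L / N
F, D = rho * u, gamma / dx             # convective and diffusive face fluxes

# Coefficients for u > 0: upwind takes the west neighbour for convection
aW = D + max(F, 0.0)
aE = D + max(-F, 0.0)
aP = aW + aE

A = np.zeros((N, N))
b = np.zeros(N)
phi_left, phi_right = 1.0, 0.0
for i in range(N):
    A[i, i] = aP
    if i > 0:
        A[i, i - 1] = -aW
    else:
        b[i] += aW * phi_left          # folded-in Dirichlet value (simplified)
    if i < N - 1:
        A[i, i + 1] = -aE
    else:
        b[i] += aE * phi_right

phi = np.linalg.solve(A, b)
print(phi[:5])   # monotone profile: upwinding is diffusive but oscillation-free
```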

Keywords: curvilinear coordinate, body-fitted mesh, momentum interpolation, non-staggered grid, supercritical fluids

Procedia PDF Downloads 130
408 Exploring Subjective Simultaneous Mixed Emotion Experiences in Middle Childhood

Authors: Esther Burkitt

Abstract:

Background: Evidence is mounting that mixed emotions can be experienced simultaneously in different ways across the lifespan. Four types of patterns of simultaneous mixed emotions (sequential, prevalent, highly parallel, and inverse types) have been identified in middle childhood and adolescence. Moreover, the recognition of these experiences tends to develop first when children consider peers rather than the self. This evidence from children and adolescents is based on examining the presence of experiences specified in adulthood. The present study therefore applied an exhaustive coding scheme to investigate whether children experience previously unidentified types of simultaneous mixed emotional experience. Methodology: One hundred and twenty children (60 girls) aged 7 years 1 month to 9 years 2 months (X = 8 years 1 month; SD = 10 months) were recruited from mainstream schools across the UK. Two age groups were formed (youngest, n = 61, 7 years 1 month to 8 years 1 month; oldest, n = 59, 8 years 2 months to 9 years 2 months) and allocated to one of two conditions, hearing vignettes describing happy and sad mixed-emotion events involving either an age- and gender-matched protagonist or themselves. Results: Loglinear analyses identified new flexuous, vertical, and other types of experience alongside the established sequential, prevalent, highly parallel, and inverse types. Older children recognised more complex experiences in the other condition than in the self condition. Conclusion: Several additional types of simultaneous mixed emotions are recognised in middle childhood. The theoretical relevance of simultaneous mixed emotion processing in childhood is considered, and the potential utility of the findings in emotion assessments is discussed.

Keywords: emotion, childhood, self, other

Procedia PDF Downloads 78
407 Solving the Overheating on the Top Floor of Energy Efficient Houses: The Envelope Improvement

Authors: Sormeh Sharifi, Wasim Saman, Alemu Alemu, David Whaley

Abstract:

Although various energy rating schemes and compulsory building codes are in use around the world, there are increasing reports of overheating in energy-efficient dwellings. Given that the cooling demand of buildings is rising globally because of climate change, the overheating issue is likely to be observed more often. This paper studied summer indoor temperatures in eight air-conditioned multi-level houses in Adelaide that comply with the Australian Nationwide House Energy Rating Scheme (NatHERS) minimum energy performance of 7.5 stars. Through monitored temperatures, this study shows that overheating was experienced on 75.5% of top floors during cooling periods, even while the air-conditioners were running. The paper found that the energy efficiency regulations have significantly improved thermal comfort on lower floors, but not on top floors, and that energy-efficient houses are not necessarily adapted to air temperature fluctuations, particularly on top floors. Based on the results, this study suggests that the envelope of top floors of multi-level houses in the South Australian context needs new criteria to make the top floor more heat resistant, in order to prevent overheating, reduce the summer peak electricity demand, and provide thermal comfort. Several methods are used to improve the envelope in the eight case studies. The results demonstrate that improving the roof was the most effective top-floor envelope measure in terms of reducing overheating.

Keywords: building code, climate change, energy-efficient building, energy rating, overheating, thermal comfort

Procedia PDF Downloads 220
406 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times can be studied; this is the basis of the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment, and its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the structure of the universe at large scales. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and at which the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements that formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows a uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Evidence of quantum fluctuations at this stage also provides a means of studying the types of imperfections with which the universe would begin. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper of the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism that led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 99
405 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

Building on the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. This paper provides an application of a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, thus improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed on an object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation, which affect the rate of recognition of features. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
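
The NCC criterion at the heart of the matcher is easy to state in numpy: both the template and each image window are mean-centered and normalized, making the score invariant to uniform brightness gain and offset. The brute-force sketch below (synthetic image, hypothetical template) is for clarity, not speed.

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template against every valid
    position of a 2-D image (brute force)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz**2).sum()) * t_norm
            out[i, j] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out   # values in [-1, 1]; 1 means a perfect brightness-invariant match

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tpl = img[20:28, 30:38].copy() * 1.7 + 0.2   # same pattern, different gain/offset
score = ncc_map(img, tpl)
print(np.unravel_index(score.argmax(), score.shape))   # -> (20, 30)
```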

Keywords: algorithm, LiDAR, object recognition, OBIA

Procedia PDF Downloads 244
404 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is an unavoidable trade-off in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregates, joins, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A suitable metric was proposed to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
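
A toy version of the idea: fit a log-linear performance formula to profiled configurations, then score every configuration affordable under the fixed fund. The records, the functional form, and the host catalog below are fabricated placeholders for illustration; the paper's actual empirical formula is not reproduced.

```python
import numpy as np

# Hypothetical profiling records: (CPU benchmark, memory GB, physical hosts)
# against a measured SQL-task performance metric. All values are placeholders.
X = np.array([
    [1200,  64,  4], [1200, 128,  4], [1800,  64,  6], [1800, 128,  6],
    [2400,  64,  8], [2400, 128,  8], [3000, 256, 10], [3600, 256, 12],
], dtype=float)
y = np.array([0.82, 0.95, 1.30, 1.46, 1.75, 1.92, 2.60, 3.10])  # tasks/min

# Regression in log space: performance ~ c * cpu^a * mem^b * hosts^d
A = np.column_stack([np.ones(len(y)), np.log(X)])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)

def predict(cpu, mem, hosts):
    x = np.array([1.0, np.log(cpu), np.log(mem), np.log(hosts)])
    return float(np.exp(x @ coef))

# Given a fixed fund, enumerate affordable host models and keep the best one.
budget = 100_000
catalog = [(6_000, 1200, 64), (9_000, 1800, 128), (14_000, 2400, 256)]
best = max(catalog, key=lambda m: predict(m[1], m[2], budget // m[0]))
print("suggested host model (price, cpu, mem):", best,
      "-> predicted", predict(best[1], best[2], budget // best[0]), "tasks/min")
```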

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks

Procedia PDF Downloads 232
403 Shear Stress and Effective Structural Stress Fields of an Atherosclerotic Coronary Artery

Authors: Alireza Gholipour, Mergen H. Ghayesh, Anthony Zander, Stephen J. Nicholls, Peter J. Psaltis

Abstract:

A three-dimensional numerical model of an atherosclerotic coronary artery is developed for the determination of high-risk situations and hence heart attack prediction. Employing the finite element method (FEM) in ANSYS, a fluid-structure interaction (FSI) model of the artery is constructed to determine the shear stress distribution as well as the von Mises stress field. A flexible model of an atherosclerotic coronary artery conveying pulsatile blood is developed, incorporating three-dimensionality, the artery's tapered shape via a linear wall distribution function, motion of the artery, blood viscosity via non-Newtonian flow theory, blood pulsation via a one-period heartbeat, hyperelasticity via the Mooney-Rivlin model, viscoelasticity via the Prony series shear relaxation scheme, and micro-calcification inside the plaque. The material properties used to relate the stress field to the strain field have been extracted from clinical data from previous in-vitro studies. The determined stress fields have the potential to be used as a predictive tool for plaque rupture and dissection. The results show that stress concentration due to micro-calcification increases the von Mises stress significantly, raising the chance of a crack developing inside the plaque. Moreover, blood pulsation varies the stress distribution substantially in some cases.
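
For reference, the two-parameter incompressible Mooney-Rivlin strain-energy model named above is W = C10(I1 - 3) + C01(I2 - 3). The sketch evaluates the standard closed-form Cauchy stress for uniaxial extension, with material constants that are illustrative rather than the clinically derived values used in the paper.

```python
import numpy as np

C10, C01 = 0.55e6, 0.14e6   # Pa, assumed illustrative constants

def uniaxial_stress(stretch):
    """Cauchy stress for incompressible uniaxial extension at stretch ratio l:
    sigma = 2*(l^2 - 1/l)*(C10 + C01/l), the standard Mooney-Rivlin result."""
    l = np.asarray(stretch, dtype=float)
    return 2.0 * (l**2 - 1.0 / l) * (C10 + C01 / l)

for lam in (1.0, 1.1, 1.3):
    print(f"stretch {lam}: {uniaxial_stress(lam) / 1e3:.1f} kPa")
```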

Keywords: atherosclerosis, fluid-structure interaction, coronary arteries, pulsatile flow

Procedia PDF Downloads 172
402 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna

Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov

Abstract:

This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from the wider diffraction-limited area of the laser's waist, which might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. This requires a grating with parameters perfectly matched to the given incident light for effective light coupling. This work is devoted to an analysis of light-grating coupling and a search for grating parameters that enhance the near field beneath the tip apex. The aim is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every single slit of the grating due to the lightning-rod effect; hence, the grating causes amplitude and phase modulation of the incident field in various ways, depending on the geometry and material of the grating. The phase-modulating grating on the probe is a sort of metasurface that provides manipulation of the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; one finds this value by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation performed here demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with such a grating effectively enhances and localizes the incident field at the sample.
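
The phase-matching condition itself is compact enough to sketch: a grating of period Λ adds reciprocal-lattice momentum m·2π/Λ to the in-plane wavevector, so a surface plasmon is launched when k_spp = k0·sin(θ) + m·2π/Λ. The wavelength, illumination angle, and the gold permittivity value below are typical literature figures, assumed for the example.

```python
import numpy as np

wl = 633e-9                        # He-Ne excitation wavelength (m), assumed
k0 = 2 * np.pi / wl
eps_m, eps_d = -11.6 + 1.2j, 1.0   # metal (Au, approx.) and air permittivities

# Surface plasmon wavevector on a flat metal/dielectric interface
k_spp = (k0 * np.sqrt(eps_m * eps_d / (eps_m + eps_d))).real

theta = np.deg2rad(30.0)           # illumination angle (assumed)
for m in (1, 2, 3):                # the grating "overtones" mentioned above
    period = 2 * np.pi * m / (k_spp - k0 * np.sin(theta))
    print(f"m={m}: grating period = {period * 1e9:.0f} nm")
```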

Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna

Procedia PDF Downloads 283